Originally Posted 16 February 2017, 11:41 am EST

Hi,
I use the C1FlexGrid control for WinForms and have a strange behaviour. In more detail I have set
c1FlexGrid.SelectionMode = SelectionModeEnum.ListBox
Moreover I use the filter property similar to Excel in the column header of the grid. In this case
[...]
for (int k = 0; k < c1FlexGrid.Rows.Selected.Count; k++)
{
    var row = c1FlexGrid.Rows.Selected[k];
    if (row.DataSource is DisplayRunRow) // 'DisplayRunRow' is a specific class
    {
        yield return (row.DataSource as DisplayRunRow).Run;
    }
}
returns one more row than is actually selected. The first item is not shown in the grid and should therefore not be part of the selected rows.
Best wishes
Markus Wendt

https://www.grapecity.com/en/forums/winforms-edition/c1flexgrid-selected-row_1
Documenting
This guide is for developers who write API documentation. To build the programming manuals run

repo.{sh|bat} docs -o

in the repo and you will find the output under _build/docs/kit-sdk/latest. Open the index.html.

NOTE: You must have successfully completed a debug build of the repo before you can build the manuals for Python. This is due to the documentation being extracted from the .pyd and .py files in the _build folder. Run build --debug-only from the root of the repo if you haven't done this already.
Documenting Python API
The best way to document our Python API is to do so directly in the code. That way it’s always extracted from a location where it’s closest to the actual code and most likely to be correct. We have two scenarios to consider:
Python code
C++ code that is exposed to Python
For both of these cases we need to write our documentation in the Python Docstring format (see PEP 257 for background). In a perfect world we would be able to use exactly the same approach, regardless of whether the Python API was written in Python or coming from C++ code that is exposing Python bindings via pybind11. Our world is unfortunately not perfect here but it’s quite close; most of the approach is the same - we will highlight when a different approach is required for the two cases of Python code and C++ code exposed to Python.
Instead of using the older and more cumbersome reStructuredText Docstring specification we have adopted the more streamlined Google Python Style Docstring format. This is how you would document an API function in Python:
from typing import Optional


def answer_question(question: str) -> Optional[str]:
    """This function can answer some questions.

    It currently only answers a limited set of questions so don't expect it
    to know everything.

    Args:
        question: The question passed to the function, trailing question
            mark is not necessary and casing is not important.

    Returns:
        The answer to the question or ``None`` if it doesn't know the
        answer.
    """
    if question.lower().startswith("what is the answer to life, universe, and everything"):
        return str(42)
    else:
        return None
After running the documentation generation system we will get this as the output (assuming the above was in a module named carb):
There are a few things you will notice:
- We use the Python type hints (introduced in Python 3.5) in the function signature so we don't need to write any of that information in the docstring. An additional benefit of this approach is that many Python IDEs can utilize this information and perform type checking when programming against the API. Notice that we always do from typing import ... so we never have to prefix with the typing namespace when referring to List, Union, Dict, and friends. This is the common approach in the Python community.
- The high-level structure is essentially in four parts:
  - A one-liner describing the function (without details or corner cases)
  - A paragraph that gives more detail on the function behavior (if necessary)
  - An Args: section (if the function takes arguments; note that self is not considered an argument)
  - A Returns: section (if the function can return something other than None)
Before we discuss the other bits to document (modules and module attributes) let’s examine how we would document the very same function if it was written in C++ and exposed to Python using pybind11.
m.def("answer_question", &answerQuestion, py::arg("question"),
      R"(
    This function can answer some questions.

    It currently only answers a limited set of questions so don't expect it
    to know everything.

    Args:
        question: The question passed to the function, trailing question
            mark is not necessary and casing is not important.

    Returns:
        The answer to the question or empty string if it doesn't know the
        answer.)");
The outcome is identical to what we saw from the Python source code, except that in C++ we cannot optionally return a string.
We want to draw your attention to the following:

- pybind11 generates the type information for you, based on the C++ types. The py::arg object must be used to get properly named arguments into the function signature (see the pybind11 documentation) - otherwise you just get arg0 and so forth in the documentation.
- Indentation is key when writing docstrings. The documentation system is clever enough to remove uniform indentation. That is, as long as all the lines have the same amount of padding, that padding will be ignored and not passed onto the restructured text processor. Fortunately clang-format leaves this funky formatting alone - respecting the raw string qualifier.
Let’s now turn our attention to how we document modules and their attributes. We should of course only document modules that are part of our API (not internal helper modules) and only public attributes. Below is a detailed example:
"""Example of Google style docstrings for module.

Attributes:
    module_level_variable2 (Optional[str]): Use objects from typing, such as
        Optional, to annotate the type properly.
    module_level_variable4 (Optional[File]): We can resolve type references
        to other objects that are built as part of the documentation. This
        will link to `carb.filesystem.File`.

Todo:
    * For module TODOs if you want them
    * These can be useful if you want to communicate any shortcomings in
      the module we plan to address

.. _Google Python Style Guide:

"""

module_level_variable1 = 12345

module_level_variable3 = 98765
"""int: Module level variable documented inline.

The type hint should be specified on the first line, separated by a colon
from the text. This approach may be preferable since it keeps the
documentation closer to the code and the default assignment is shown. A
downside is that the variable will get alphabetically sorted among functions
in the module so won't have the same cohesion as the approach above."""

module_level_variable2 = None

module_level_variable4 = None
This is what the documentation would look like:
As we have mentioned, we should not mix the Attributes: style of documentation with inline documentation of attributes. Notice how module_level_variable3 appears in a separate block from all the other attributes that were documented. It is even after the TODO section. Choose one approach for your module and stick to it. There are valid reasons to pick one style above the other but don't cross the streams! As before we use type hints from typing but we don't use the typing syntax to attach them. We write:
"""...

Attributes:
    module_variable (Optional[str]): This is important ...

"""
or
module_variable = None
"""Optional[str]: This is important ..."""
But we don’t write:
from typing import Optional

module_variable: Optional[str] = 12345
"""This is important ..."""
This is because the last form (which was introduced in Python 3.6) is still poorly supported by tools - including our documentation system. It also doesn’t work with Python bindings generated from C++ code using pybind11.
For instructions on how to document classes, exceptions, etc., please consult the Sphinx Napoleon Extension Guide.
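As a quick illustration before moving on (this example is not from the original guide; the class and attribute names are hypothetical), a Google-style class docstring follows the same pattern as the function docstrings above, with an Attributes: section at the class level:

```python
from typing import Optional


class QuestionAnswerer:
    """Answers a limited set of questions.

    Attributes:
        question_count (int): Number of questions answered so far.
    """

    def __init__(self) -> None:
        self.question_count = 0

    def answer(self, question: str) -> Optional[str]:
        """Answer a single question.

        Args:
            question: The question text; casing is not important.

        Returns:
            The answer, or ``None`` if the question is not recognized.
        """
        self.question_count += 1
        if "life, universe, and everything" in question.lower():
            return str(42)
        return None
```

The same one-liner / detail / Args: / Returns: structure applies to each method, while instance attributes are documented once on the class.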
Adding New Python Modules
When adding a new python binding module and documenting it, some extra steps must be taken to get the documentation build to pick up the new module. By default, Sphinx will only pick up documentation for modules that it is explicitly told to. These modules are listed in the table of contents file located at:
docs/manuals/py/api/core/index.rst
Once the new module has been added here (alphabetical order please!), a new index file must also be created for the new module. This is in the same directory as index.rst and must be named using the python module name and have a .rst extension. The new file should have the following contents (where <moduleName> is the python module name):
<moduleName> module
#####################

.. automodule:: <moduleName>
    :platform: Windows-x86_64, Linux-x86_64
    :members:
    :undoc-members:
    :imported-members:

https://docs.omniverse.nvidia.com/py/kit/docs/Documenting.html
Question
What is the market multiple and how can it help in evaluating a stock’s P/E ratio? Is a stock’s relative P/E the same thing as the market multiple? Explain.
#include <vote_routerstatus_st.h>
The claim about a single router, made in a vote.
Definition at line 14 of file vote_routerstatus_st.h.
Ed25519 identity for this router, or zero if it has none.
Definition at line 38 of file vote_routerstatus_st.h.
True if the Ed25519 listing here is the consensus-opinion for the Ed25519 listing; false if there was no consensus on Ed25519 key status, or if this VRS doesn't reflect it.
Definition at line 33 of file vote_routerstatus_st.h.
Bit-field for all recognized flags; index into networkstatus_t.known_flags.
Definition at line 20 of file vote_routerstatus_st.h.
True iff the vote included an entry for ed25519 ID, or included "id ed25519 none" to indicate that there was no ed25519 ID.
Definition at line 29 of file vote_routerstatus_st.h.
The vote had a measured bw
Definition at line 26 of file vote_routerstatus_st.h.
Referenced by measured_bw_line_apply().
Measured bandwidth (capacity) of the router
Definition at line 34 of file vote_routerstatus_st.h.
Referenced by measured_bw_line_apply().
The hash or hashes that the authority claims this microdesc has.
Definition at line 36 of file vote_routerstatus_st.h.
Referenced by vote_routerstatus_free_().
The protocols that this authority says this router provides.
Definition at line 24 of file vote_routerstatus_st.h.
Referenced by vote_routerstatus_free_().
Underlying 'status' object for this router. Flags are redundant.
Definition at line 15 of file vote_routerstatus_st.h.
Referenced by compare_digest_to_vote_routerstatus_entry(), compare_vote_rs(), compute_routerstatus_consensus(), and vote_routerstatus_free_().
The version that the authority says this router is running.
Definition at line 22 of file vote_routerstatus_st.h.
Referenced by vote_routerstatus_free_(). | https://people.torproject.org/~nickm/tor-auto/doxygen/structvote__routerstatus__t.html | CC-MAIN-2019-39 | refinedweb | 250 | 54.18 |
I'm currently keeping track of the large scale digitization of video tapes and need help pulling data from multiple CSVs. Most tapes have multiple copies, but we only digitize one tape from the set. I would like to create a new CSV containing only tapes of shows that have yet to be digitized. Here's a mockup of my original CSV:
Date Digitized | Series | Episode Number | Title | Format
---------------|----------|----------------|-------|--------
01-01-2016 | Series A | 101 | | VHS
| Series A | 101 | | Beta
| Series A | 101 | | U-Matic
| Series B | 101 | | VHS
import csv, glob
import os  # needed for os.path.splitext below

names = glob.glob("*.csv")
names = [os.path.splitext(each)[0] for each in names]
for name in names:
    with open("%s_.csv" % name, "rb") as source:
        reader = csv.reader( source )
        with open("%s_edit.csv" % name,"wb") as result:
            writer = csv.writer( result )
            for row in reader:
                if row[0]:
                    series = row[1]
                    epnum = row[2]
                if row[1] != series and row[2] != epnum:
                    writer.writerow(row)
The simplest approach is to make two reads of the set of CSV files: one to build a list of all digitized tapes, the second to build a unique list of all tapes not on the digitized list:
# build list of digitized tapes
digitized = []
for name in names:
    with open("%s_.csv" % name, "rb") as source:
        reader = csv.reader(source)
        next(reader)  # skip header
        for row in reader:
            if row[0] and ((row[1], row[2]) not in digitized):
                digitized.append((row[1], row[2]))

# build list of non-digitized tapes
digitize_me = []
for name in names:
    with open("%s_.csv" % name, "rb") as source:
        reader = csv.reader(source)
        header = next(reader)[1:3]  # skip / save header
        for row in reader:
            if not row[0] and ((row[1], row[2]) not in digitized + digitize_me):
                digitize_me.append((row[1], row[2]))

# write non-digitized tapes to 'digitize.csv'
with open("digitize.csv", "wb") as result:
    writer = csv.writer(result)
    writer.writerow(header)
    for tape in digitize_me:
        writer.writerow(tape)
input file 1:
Date Digitized,Series,Episode Number,Title,Format
01-01-2016,Series A,101,,VHS
,Series A,101,,Beta
,Series C,101,,Beta
,Series D,102,,VHS
,Series B,101,,U-Matic
input file 2:
Date Digitized,Series,Episode Number,Title,Format
,Series B,101,,VHS
,Series D,101,,Beta
01-01-2016,Series C,101,,VHS
Output:
Series,Episode Number
Series D,102
Series B,101
Series D,101
As per OP comment, the line
header = next(reader)[1:3] # skip / save header
serves two purposes:
- Since each input csv file starts with a header, we do not want to read that header row as if it contained data about our tapes, so we need to "skip" the header row in that sense
- We are also writing an output csv file. We want that file to have a header as well. Since we are only writing the series and episode number, which are row fields 1 and 2, we assign just that slice, i.e. [1:3], of the header row to the header variable
It's not really standard to have a line of code serve two pretty unrelated purposes like that, which is why I commented it. It also assigns to header multiple times (assuming multiple input files) when header only needs to be assigned once. Perhaps a cleaner way to write that section would be:
# build list of non-digitized tapes
digitize_me = []
header = None
for name in names:
    with open("%s_.csv" % name, "rb") as source:
        reader = csv.reader(source)
        if header:
            next(reader)  # skip header
        else:
            header = next(reader)[1:3]  # read header
        for row in reader:
            ...
It's a question of which form is more readable. Either way is close but I thought combining 5 lines into one keeps the focus on the more salient parts of the code. I would probably do it the other way next time. | https://codedump.io/share/xekXqTzuFJjT/1/python-csv-how-to-ignore-writing-similar-rows-given-one-row-meets-a-condition | CC-MAIN-2017-34 | refinedweb | 627 | 63.7 |
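One more side note, not part of the original answer: the `not in digitized + digitize_me` test rescans (and concatenates) lists on every row, which gets slow for large catalogs. Sets give O(1) membership checks, and both passes can be merged into one. The sketch below inlines the same sample data as strings so it is self-contained, and assumes Python 3 (the answer's "rb"/"wb" file modes are Python 2):

```python
import csv
import io

# The two input files from the answer, inlined so no files are needed.
file_contents = [
    "Date Digitized,Series,Episode Number,Title,Format\n"
    "01-01-2016,Series A,101,,VHS\n"
    ",Series A,101,,Beta\n"
    ",Series C,101,,Beta\n"
    ",Series D,102,,VHS\n"
    ",Series B,101,,U-Matic\n",
    "Date Digitized,Series,Episode Number,Title,Format\n"
    ",Series B,101,,VHS\n"
    ",Series D,101,,Beta\n"
    "01-01-2016,Series C,101,,VHS\n",
]

digitized = set()   # tapes that have a digitization date; O(1) lookups
seen = set()        # de-duplicates candidate tapes
candidates = []     # keeps first-seen order for the output
for content in file_contents:
    reader = csv.reader(io.StringIO(content))
    next(reader)  # skip header
    for row in reader:
        tape = (row[1], row[2])
        if row[0]:
            digitized.add(tape)
        elif tape not in seen:
            seen.add(tape)
            candidates.append(tape)

# Only keep tapes for which no copy was ever digitized.
digitize_me = [t for t in candidates if t not in digitized]
print(digitize_me)
# [('Series D', '102'), ('Series B', '101'), ('Series D', '101')]
```

This produces the same three tapes as the two-pass version above; the single pass works because the final filter against `digitized` runs only after every file has been read.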
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

import org.apache.commons.lang3.Validate;

/**
 * <p>
 * A specialized <em>semaphore</em> implementation that provides a number of
 * permits in a given time frame.
 * </p>
 * <p>
 * This class is similar to the {@code java.util.concurrent.Semaphore} class
 * provided by the JDK in that it manages a configurable number of permits.
 * Using the {@link #acquire()} method a permit can be requested by a thread.
 * However, there is an additional timing dimension: there is no {@code
 * release()} method for freeing a permit, but all permits are automatically
 * released at the end of a configurable time frame. If a thread calls
 * {@link #acquire()} and the available permits are already exhausted for this
 * time frame, the thread is blocked. When the time frame ends all permits
 * requested so far are restored, and blocking threads are waked up again, so
 * that they can try to acquire a new permit. This basically means that in the
 * specified time frame only the given number of operations is possible.
 * </p>
 * <p>
 * A use case for this class is to artificially limit the load produced by a
 * process. As an example consider an application that issues database queries
 * on a production system in a background process to gather statistical
 * information. This background processing should not produce so much database
 * load that the functionality and the performance of the production system are
 * impacted. Here a {@code TimedSemaphore} could be installed to guarantee that
 * only a given number of database queries are issued per second.
 * </p>
 * <p>
 * A thread class for performing database queries could look as follows:
 * </p>
 *
 * <pre>
 * public class StatisticsThread extends Thread {
 *     // The semaphore for limiting database load.
 *     private final TimedSemaphore semaphore;
 *     // Create an instance and set the semaphore
 *     public StatisticsThread(TimedSemaphore timedSemaphore) {
 *         semaphore = timedSemaphore;
 *     }
 *     // Gather statistics
 *     public void run() {
 *         try {
 *             while(true) {
 *                 semaphore.acquire(); // limit database load
 *                 performQuery(); // issue a query
 *             }
 *         } catch(InterruptedException) {
 *             // fall through
 *         }
 *     }
 *     ...
 * }
 * </pre>
 *
 * <p>
 * The following code fragment shows how a {@code TimedSemaphore} is created
 * that allows only 10 operations per second and passed to the statistics
 * thread:
 * </p>
 *
 * <pre>
 * TimedSemaphore sem = new TimedSemaphore(1, TimeUnit.SECOND, 10);
 * StatisticsThread thread = new StatisticsThread(sem);
 * thread.start();
 * </pre>
 *
 * <p>
 * When creating an instance the time period for the semaphore must be
 * specified. {@code TimedSemaphore} uses an executor service with a
 * corresponding period to monitor this interval. The {@code
 * ScheduledExecutorService} to be used for this purpose can be provided at
 * construction time. Alternatively the class creates an internal executor
 * service.
 * </p>
 * <p>
 * Client code that uses {@code TimedSemaphore} has to call the
 * {@link #acquire()} method in each processing step. {@code TimedSemaphore}
 * keeps track of the number of invocations of the {@link #acquire()} method and
 * blocks the calling thread if the counter exceeds the limit specified. When
 * the timer signals the end of the time period the counter is reset and all
 * waiting threads are released. Then another cycle can start.
 * </p>
 * <p>
 * It is possible to modify the limit at any time using the
 * {@link #setLimit(int)} method. This is useful if the load produced by an
 * operation has to be adapted dynamically. In the example scenario with the
 * thread collecting statistics it may make sense to specify a low limit during
 * day time while allowing a higher load in the night time. Reducing the limit
 * takes effect immediately by blocking incoming callers. If the limit is
 * increased, waiting threads are not released immediately, but wake up when the
 * timer runs out. Then, in the next period more processing steps can be
 * performed without blocking. By setting the limit to 0 the semaphore can be
 * switched off: in this mode the {@link #acquire()} method never blocks, but
 * lets all callers pass directly.
 * </p>
 * <p>
 * When the {@code TimedSemaphore} is no more needed its {@link #shutdown()}
 * method should be called. This causes the periodic task that monitors the time
 * interval to be canceled. If the {@code ScheduledExecutorService} has been
 * created by the semaphore at construction time, it is also shut down.
 * After that {@link #acquire()} must not be called any more.
 * </p>
 *
 * @since 3.0
 */
public class TimedSemaphore {
    /**
     * Constant for a value representing no limit. If the limit is set to a
     * value less or equal this constant, the {@code TimedSemaphore} will be
     * effectively switched off.
     */
    public static final int NO_LIMIT = 0;

    /** Constant for the thread pool size for the executor. */
    private static final int THREAD_POOL_SIZE = 1;

    /** The executor service for managing the timer thread. */
    private final ScheduledExecutorService executorService;

    /** Stores the period for this timed semaphore. */
    private final long period;

    /** The time unit for the period. */
    private final TimeUnit unit;

    /** A flag whether the executor service was created by this object. */
    private final boolean ownExecutor;

    /** A future object representing the timer task. */
    private ScheduledFuture<?> task; // @GuardedBy("this")

    /** Stores the total number of invocations of the acquire() method. */
    private long totalAcquireCount; // @GuardedBy("this")

    /**
     * The counter for the periods. This counter is increased every time a
     * period ends.
     */
    private long periodCount; // @GuardedBy("this")

    /** The limit. */
    private int limit; // @GuardedBy("this")

    /** The current counter. */
    private int acquireCount; // @GuardedBy("this")

    /** The number of invocations of acquire() in the last period. */
    private int lastCallsPerPeriod; // @GuardedBy("this")

    /** A flag whether shutdown() was called. */
    private boolean shutdown; // @GuardedBy("this")

    /**
     * Creates a new instance of {@link TimedSemaphore} and initializes it with
     * the given time period and the limit.
     *
     * @param timePeriod the time period
     * @param timeUnit the unit for the period
     * @param limit the limit for the semaphore
     * @throws IllegalArgumentException if the period is less or equals 0
     */
    public TimedSemaphore(final long timePeriod, final TimeUnit timeUnit, final int limit) {
        this(null, timePeriod, timeUnit, limit);
    }

    /**
     * Creates a new instance of {@link TimedSemaphore} and initializes it with
     * an executor service, the given time period, and the limit. The executor
     * service will be used for creating a periodic task for monitoring the time
     * period. It can be <b>null</b>, then a default service will be created.
     *
     * @param service the executor service
     * @param timePeriod the time period
     * @param timeUnit the unit for the period
     * @param limit the limit for the semaphore
     * @throws IllegalArgumentException if the period is less or equals 0
     */
    public TimedSemaphore(final ScheduledExecutorService service, final long timePeriod,
            final TimeUnit timeUnit, final int limit) {
        Validate.inclusiveBetween(1, Long.MAX_VALUE, timePeriod, "Time period must be greater than 0!");

        period = timePeriod;
        unit = timeUnit;

        if (service != null) {
            executorService = service;
            ownExecutor = false;
        } else {
            final ScheduledThreadPoolExecutor s = new ScheduledThreadPoolExecutor(
                    THREAD_POOL_SIZE);
            s.setContinueExistingPeriodicTasksAfterShutdownPolicy(false);
            s.setExecuteExistingDelayedTasksAfterShutdownPolicy(false);
            executorService = s;
            ownExecutor = true;
        }

        setLimit(limit);
    }

    /**
     * Returns the limit enforced by this semaphore. The limit determines how
     * many invocations of {@link #acquire()} are allowed within the monitored
     * period.
     *
     * @return the limit
     */
    public final synchronized int getLimit() {
        return limit;
    }

    /**
     * Sets the limit. This is the number of times the {@link #acquire()} method
     * can be called within the time period specified. If this limit is reached,
     * further invocations of {@link #acquire()} will block. Setting the limit
     * to a value <= {@link #NO_LIMIT} will cause the limit to be disabled,
     * i.e. an arbitrary number of {@link #acquire()} invocations is allowed in
     * the time period.
     *
     * @param limit the limit
     */
    public final synchronized void setLimit(final int limit) {
        this.limit = limit;
    }

    /**
     * Initializes a shutdown. After that the object cannot be used any more.
     * This method can be invoked an arbitrary number of times. All invocations
     * after the first one do not have any effect.
     */
    public synchronized void shutdown() {
        if (!shutdown) {

            if (ownExecutor) {
                // if the executor was created by this instance, it has
                // to be shutdown
                getExecutorService().shutdownNow();
            }
            if (task != null) {
                task.cancel(false);
            }

            shutdown = true;
        }
    }

    /**
     * Tests whether the {@link #shutdown()} method has been called on this
     * object. If this method returns <b>true</b>, this instance cannot be used
     * any longer.
     *
     * @return a flag whether a shutdown has been performed
     */
    public synchronized boolean isShutdown() {
        return shutdown;
    }

    /**
     * Tries to acquire a permit from this semaphore. This method will block if
     * the limit for the current period has already been reached. If
     * {@link #shutdown()} has already been invoked, calling this method will
     * cause an exception. The very first call of this method starts the timer
     * task which monitors the time period set for this {@code TimedSemaphore}.
     * From now on the semaphore is active.
     *
     * @throws InterruptedException if the thread gets interrupted
     * @throws IllegalStateException if this semaphore is already shut down
     */
    public synchronized void acquire() throws InterruptedException {
        if (isShutdown()) {
            throw new IllegalStateException("TimedSemaphore is shut down!");
        }

        if (task == null) {
            task = startTimer();
        }

        boolean canPass = false;
        do {
            canPass = getLimit() <= NO_LIMIT || acquireCount < getLimit();
            if (!canPass) {
                wait();
            } else {
                acquireCount++;
            }
        } while (!canPass);
    }

    /**
     * Returns the number of (successful) acquire invocations during the last
     * period. This is the number of times the {@link #acquire()} method was
     * called without blocking. This can be useful for testing or debugging
     * purposes or to determine a meaningful threshold value. If a limit is set,
     * the value returned by this method won't be greater than this limit.
     *
     * @return the number of non-blocking invocations of the {@link #acquire()}
     * method
     */
    public synchronized int getLastAcquiresPerPeriod() {
        return lastCallsPerPeriod;
    }

    /**
     * Returns the number of invocations of the {@link #acquire()} method for
     * the current period. This may be useful for testing or debugging purposes.
     *
     * @return the current number of {@link #acquire()} invocations
     */
    public synchronized int getAcquireCount() {
        return acquireCount;
    }

    /**
     * Returns the number of calls to the {@link #acquire()} method that can
     * still be performed in the current period without blocking. This method
     * can give an indication whether it is safe to call the {@link #acquire()}
     * method without risking to be suspended. However, there is no guarantee
     * that a subsequent call to {@link #acquire()} actually is not-blocking
     * because in the mean time other threads may have invoked the semaphore.
     *
     * @return the current number of available {@link #acquire()} calls in the
     * current period
     */
    public synchronized int getAvailablePermits() {
        return getLimit() - getAcquireCount();
    }

    /**
     * Returns the average number of successful (i.e. non-blocking)
     * {@link #acquire()} invocations for the entire life-time of this {@code
     * TimedSemaphore}. This method can be used for instance for statistical
     * calculations.
     *
     * @return the average number of {@link #acquire()} invocations per time
     * unit
     */
    public synchronized double getAverageCallsPerPeriod() {
        return periodCount == 0 ? 0 : (double) totalAcquireCount
                / (double) periodCount;
    }

    /**
     * Returns the time period. This is the time monitored by this semaphore.
     * Only a given number of invocations of the {@link #acquire()} method is
     * possible in this period.
     *
     * @return the time period
     */
    public long getPeriod() {
        return period;
    }

    /**
     * Returns the time unit. This is the unit used by {@link #getPeriod()}.
     *
     * @return the time unit
     */
    public TimeUnit getUnit() {
        return unit;
    }

    /**
     * Returns the executor service used by this instance.
     *
     * @return the executor service
     */
    protected ScheduledExecutorService getExecutorService() {
        return executorService;
    }

    /**
     * Starts the timer. This method is called when {@link #acquire()} is called
     * for the first time. It schedules a task to be executed at fixed rate to
     * monitor the time period specified.
     *
     * @return a future object representing the task scheduled
     */
    protected ScheduledFuture<?> startTimer() {
        return getExecutorService().scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                endOfPeriod();
            }
        }, getPeriod(), getPeriod(), getUnit());
    }

    /**
     * The current time period is finished. This method is called by the timer
     * used internally to monitor the time period. It resets the counter and
     * releases the threads waiting for this barrier.
     */
    synchronized void endOfPeriod() {
        lastCallsPerPeriod = acquireCount;
        totalAcquireCount += acquireCount;
        periodCount++;
        acquireCount = 0;
        notifyAll();
    }
}
Wiki
nltk-extras / Home
NLTK Extras
Currently, this contains probability.py, a module for building FreqDists on top of Redis. Also included is a copy of the python redis.py interface. It is based on the 0.9.9 release of NLTK and 0.096 release of Redis. Because it relies on the internal implementation of FreqDist, and given that Redis is still very much in beta, I can't guarantee it'll work with future versions of NLTK and/or Redis.
RedisFreqDist
RedisFreqDist works just like NLTK's FreqDist, but stores samples and frequency counts as keys and values. That means samples must be strings. And of course, you'll need a running redis-server for
RedisFreqDist to work. Below is a simple example function for creating a
RedisFreqDist and counting samples.
def make_freq_dist(samples, host='localhost', port=6379, db=0):
    freqs = RedisFreqDist(host=host, port=port, db=db)
    for sample in samples:
        freqs.inc(sample)
All of the other FreqDist functions are supported, allowing you to get a list of all the samples, and to lookup the frequency count of each sample. For more info, see my article Building a NLTK FreqDist on Redis.
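Conceptually, that interface amounts to the following in-memory stand-in (illustrative only; DictFreqDist is not part of the library, and RedisFreqDist differs precisely in that it keeps the mapping in Redis rather than in process memory):

```python
class DictFreqDist:
    """In-memory stand-in mirroring the FreqDist-style calls used above."""
    def __init__(self):
        self._counts = {}

    def inc(self, sample):
        # samples must be strings, just as with RedisFreqDist
        self._counts[sample] = self._counts.get(sample, 0) + 1

    def samples(self):
        # all distinct samples seen so far
        return list(self._counts.keys())

    def __getitem__(self, sample):
        # frequency count for one sample; 0 if never seen
        return self._counts.get(sample, 0)
```

With Redis, each inc becomes an increment of a key, and samples() a key scan, but the calling code looks the same.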
ConditionalRedisFreqDist
probability.py also includes ConditionalRedisFreqDist, which works just like NLTK's ConditionalFreqDist. There's also RedisConditionalFreqDist for when you have a large number of conditions, but it's not very well tested.
Dino Esposito
March 22, 2001
I don't know about you, but I didn't have a prompt reply ready when an existential question like, "What is software exactly?" hit me.
Consider the scene—you're in a tourist shop totally immersed in the very delicate task of buying useless things (mainly souvenirs) to make friends and relatives happy when they pick you up at the airport. In such cases, you're ritually asked a question like "First time here? Is it for business or vacation?"
So if you're somehow involved with software, and aren't on vacation, chances are you have to face that existential question.
So what is software about—exactly?
It's hard to answer existential questions, especially if you're walking around with a bag full of postcards, koala-pictured bibs, plush kangaroos, and yellow signs warning about crocodiles.
I tried to keep my thoughts free flowing, but substantially simple. First off, software is about computers. Software is also about evolution. Certainly, software is about data and, in particular, about data storage and manipulation.
When I arrived to the hotel, my thoughts were processing the following point—what kind of evolution have I observed in recent years for data storage and usage? So I started meditating on OLE DB and its evolution in light of .NET.
Historically speaking, ODBC was the first serious attempt to create a uniform way in which applications access databases. As with everything else in software, ODBC was designed to meet a specific demand. It set a new stage in the never ending evolutionary process of information technology.
ODBC had to provide a common, and hopefully abstract, API to access databases irrespective of their internal details, languages, and table organization. Over time, though, ODBC proved to be progressively less adequate to the new ways of designing and building data-driven applications.
Since software makes no exception to Darwinism, ODBC adapted and survived with a different name, a different programming model, and new hot functionalities, but preserved its true vocation. ODBC continued to provide (more or less) open database connectivity under the name and the functions of OLE DB.
OLE DB is the programming interface that puts into practice the theoretical concepts of the Microsoft Universal Data Access (UDA) strategy. UDA provides the ability to access any type of data—relational, non-relational, and hierarchical—through a single, COM-based programming interface.
Designed as a component technology, OLE DB features a multi-layered model. On one side, you find server components that hold the data. On the other side of the COM bridge, you have client components that know how to connect and request data. The former are called OLE DB data providers; the latter are, instead, known as OLE DB consumers.
Both consumers and providers are COM objects and talk to each other through a set of COM interfaces. Such COM-based communication can be summarized in terms of actions performed on abstract objects like DataSource, Session, Command, and Rowset. So it happens that a consumer connects to a DataSource, opens a Session, issues a Command, and brings home a Rowset of data.
The Darwinistic evolution from ODBC brought UDA and OLE DB to add the ability to glue together, almost like a single relational table, all the enterprise data, irrespective of their relational, non-relational or even hierarchical nature.
When it comes to data access, you have two basic choices. One is universalizing the data access strategy as UDA allows you to do. The other, instead, is geared towards the universalization of the data structure. It forces you to move every bit of information you may have out of its current data store and into a single, all-encompassing database server.
With OLE DB you try to glue together what your clients have today. Going the other way round, you force clients to scale up to a new, more powerful, unique DBMS, which has the ability to manipulate any format of information you need.
OLE DB is much more independent from the physical structure of the data than ODBC. In addition, it is not strictly based on SQL. OLE DB commands can be SQL statements as well as something else. In general, they can be seen as text strings written according to any syntax that the target provider can understand.
Like ODBC, OLE DB was designed with C++ in mind to maximize the performance of data access in middle-tier modules. For these same reasons, OLE DB is not directly usable from Visual Basic® or ASP.
Countless distributed systems, instead, were supposed to use Visual Basic to build components. That's the main reason why Microsoft introduced the ActiveX® Data Objects (ADO) library.
ADO has a richer programming interface than the raw OLE DB SDK. While it's definitely possible to use ADO from C++ applications, OLE DB calls pass through less layers of code and go more directly down to the data than the corresponding ADO code.
While ADO is clearly built on top of OLE DB, calls to raw OLE DB interfaces and calls issued through the ADO runtime have different relative speeds. This fact originated a sort of language-based dichotomy. What's better and more recommendable? The C++ high-performance level of OLE DB, or the easier, more forgiving model of ADO in Visual Basic components?
Aside from providers and consumers, the OLE DB model also includes a third element—the OLE DB Services. A service is a COM component that processes the Rowset being returned to the consumer. It works as a sort of hook that monitors all the traffic between the consumer and the provider. ADO heavily relies on OLE DB Services to add its extended functionality like data shaping, persistence, and disconnected recordsets.
As people got serious about building distributed COM-based applications, a number of best practices were developed for particular fields. To improve the scalability of Web applications, people turned to the disconnected data access model.
In a nutshell, the data consumer and the data provider aren't connected all the time. Once the connection is established, you issue the given query, fetch records out to an in-memory repository, and disconnect from the data source. You work offline on those records and, if needed, reconnect later and submit your changes. This model doesn't just work for everyone. Whenever it makes sense, though, it turns out to be extremely valuable in terms of scalability gains and overall performance.
A lot of systems out there have been (re)converted to employ ADO recordsets through the client-side cursor service which enables data disconnection. OLE DB has not been specifically thought of as such a model of interaction, so ADO is extended through an intermediate OLE DB service.
Thanks to the inherent flexibility of its architecture, OLE DB can be successfully used in a disconnected scenario, but that is certainly not where it works best. Another subtle limitation of this implementation is that ADO recordsets are relied on to do so many things that there's a suspicion they can't do everything well. How can such an object be the fastest you can have to work both connected and disconnected, with and without XML, whether fabricated in memory or loaded from disk?
In addition, with OLE DB you have a significant lack of consistency if you consider that the bag of ADO functionality is remarkably different from the raw OLE DB SDK.
Thus ADO.NET becomes the next step in the evolutionary process of data access technologies. As the name suggests, though, ADO.NET is seemingly only the successor of ADO. What about OLE DB in .NET?
The timeless laws of Darwinism are now forcing the OLE DB technology to move one step forward to meet the demands of new users. In .NET, a Web application is primarily a disconnected application that makes use of freshly designed, ad-hoc tools for managing data.
The .NET Framework makes available classes to work with data. Such classes—specifically the ADO.NET and the XML namespaces—provide for collecting, reading, and writing. The ADO.NET and XML subsystems end up replacing both ADO and the OLE DB SDK so that now you have one single, and language neutral, way to getting or setting data.
ADO.NET classes abstract the data source even better than ADO because of their clearly data-centric design, opposite of the database-centric model still visible in ADO.
The .NET counterpart of OLE DB providers are called managed providers. Their role is explained in the picture below.
Figure 1. The managed provider's architectural diagram
As in OLE DB, you can recognize two interacting layers that, by analogy, I have called the managed consumer layer and the managed provider layer. To manipulate data, your .NET application doesn't need to seek out special classes or components acting as consumer modules.
A .NET application merely utilizes the DataSet or the DataReader object from the native framework and immediately becomes a "managed" data consumer. To physically fetch the data, you use instances of special classes that inherit from DataSetCommand and DBCommand. These classes represent your link to the data source.
Instead of instructing a rather generic object to go against a given provider, you simply use derived classes that already know how to cope with that given provider. So it happens that SQLDataSetCommand handles SQL Server databases and ADODataSetCommand wraps all existing OLE DB providers.
The managed provider is buried into such DataSetCommand classes. You never realize their presence and never need to specifically know about them. You use classes, set properties, and you're happy.
Under the hood, the managed provider layer in the figure above utilizes an interaction model not much different from the one in use with OLE DB, and even earlier with ODBC. The consumer command class targets a particular component that wraps a data source. It knows about the protocol that is used to read and write rows on the source. It also returns results in a format that .NET classes know perfectly how to handle.
For a better understanding, let's review the elements that concur to data retrieval in both OLE DB and .NET.
Table 1. Comparing OLE DB and .NET data providers
The target provider is identified through its COM progID in OLE DB. In .NET, instead, such details are buried in the accessor class.
OLE DB providers always return rowsets—COM objects mainly exposing the IRowset interface. If you're accessing data through ADO, rowsets are then translated into richer and scriptable objects called recordsets.
.NET applications simply use different classes with different capabilities. The DataReader class is a simple, fast, forward-only cursor that works connected and provides access on a per-record basis. When you finish with it, you must explicitly disconnect. By contrast, the DataSet object is an in-memory, disconnected collection of tables. It is populated by the DataSetCommand class. The content of a DataSet object is based on the XML stream the DataSetCommand class got back from the data source.
I'll be covering DataReader and DataSet classes in upcoming columns.
The data goes from the provider to the consumer in binary format, and through COM marshaling if you employ OLE DB. In .NET, instead, the managed provider returns an XML stream.
Both breeds of providers support a query language, which is normally SQL with vendor-specific proprietary extensions. Through this language, you perform updates and interrogate the data source.
So what's the difference between OLE DB and .NET data providers? Speaking abstractly, they share the same vision of data access. But managed providers are much simpler and specialized. They result in better performance for two main reasons. First off, managed providers aren't supposed to use the COM Interop bridge to get and set data. Being COM components, OLE DB providers have no choice on this point. Furthermore, managed providers normally leverage the vendor's knowledge of the data source internals to get and set rows much faster. This is exactly what OLE DB providers do as well, but when used within .NET, OLE DB providers pay the price of their COM-based nature and need extra code to translate data into .NET-specific classes.
As of Beta 1, the .NET Framework features two managed providers: one for SQL Server (version 7.0 and higher) and one for all the data sources you can reach through an OLE DB provider.
The SQL Server managed provider is hidden behind specific classes like SQLDataReader, SQLDataSetCommand, and SQLCommand. Those classes utilize direct access to the low-level SQL Server file system. The picture below shows the provider's class diagram mapping the previous general schema to the SQL Server's managed provider.
Figure 2. Class diagram for the SQL Server managed provider
The managed provider for OLE DB plays the same role in .NET as the OLE DB provider for ODBC in Windows DNA systems. Basically, it represents the backward compatibility and the living proof that any .NET application can target any existing OLE DB-powered data source. The class diagram for the OLE DB managed provider is shown below.
Figure 3. Class diagram for the OLE DB managed provider
Notice that ADOxxx classes are expected to be renamed to OleDbxxx with Beta 2.
The OLE DB managed provider exposes .NET classes to the callers, but leverages the specified OLE DB provider to fetch rows. The communication between the .NET application and the underlying OLE DB provider (a COM object) takes place over the COM Interop bridge.
In general, in .NET you can access SQL Server 7.0 (and higher) tables through both providers. The managed provider of SQL Server goes straight to the DBMS file system to ask for data. The OLE DB managed provider, instead, relies on the services of the SQLOLEDB OLE DB provider, resulting in an extra layer of code to be traversed.
Today, if you target any data source other than SQL Server, the OLE DB managed provider is the only way you have to go. Through this same channel, you can also reach any ODBC data source.
The managed provider for OLE DB is a thin wrapper built on top of the COM Interop bridge calling into the native OLE DB provider. Aside from setting up and terminating the call, such a module also takes care of packing the returned rowset into a DataSet or an ADODataReader object for further .NET processing.
At the level of .NET code, accessing a SQL Server table through the native managed provider, or through the OLE DB provider, is essentially a matter of changing the prefix of the involved classes. Here's the code for SQL Server:
Dim strConn, strCmd As String
strConn = "DATABASE=Northwind;SERVER=localhost;Integrated Security=SSPI;"
strCmd = "SELECT * FROM Employees"
Dim oCMD As New SQLDataSetCommand(strCmd, strConn)
Dim oDS As New DataSet
oCMD.FillDataSet(oDS, "EmployeesList")
And here's the code for the OLE DB provider (note the changed connection string and command class):
Dim strConn, strCmd As String
strConn = "Provider=SQLOLEDB;"
strConn += "DATABASE=Northwind;SERVER=localhost;Integrated Security=SSPI;"
strCmd = "SELECT * FROM Employees"
Dim oCMD As New ADODataSetCommand(strCmd, strConn)
Dim oDS As New DataSet
oCMD.FillDataSet(oDS, "EmployeesList")
As you can see, on the surface the differences are very minimal; just the connection string and the command class. Employing one class or the other, instead, makes a big difference.
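For completeness, here is what the connected-access path looks like with a DataReader. A caveat: this sketch uses the class names as they eventually shipped in the SqlClient namespace (SqlConnection, SqlCommand, SqlDataReader); in the Beta 1 bits discussed in this article the same classes carry the SQLxxx spelling, so treat the listing as illustrative rather than copy-and-paste code.

```vb
Imports System
Imports System.Data.SqlClient

Module DataReaderSketch
    Sub Main()
        Dim strConn As String = "DATABASE=Northwind;SERVER=localhost;Integrated Security=SSPI;"
        Dim oConn As New SqlConnection(strConn)
        Dim oCmd As New SqlCommand("SELECT LastName FROM Employees", oConn)
        oConn.Open()
        Dim oReader As SqlDataReader = oCmd.ExecuteReader()
        While oReader.Read()
            Console.WriteLine(oReader("LastName"))
        End While
        oReader.Close()
        ' The reader works connected: close it, and the connection, explicitly.
        oConn.Close()
    End Sub
End Module
```

Notice how different the shape is from the DataSet listings above: the reader holds the connection open for the whole loop, which is exactly why it is the fast, connected half of the pair.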
.NET managed providers represent the next step in the evolution of data access technologies but, as of Beta 1, there's no documented SDK to write data-source specific managed providers. Waiting for Beta 2, a few basic questions about OLE DB and .NET cannot be skipped.
Is all the code developed for OLE DB just legacy code? What will come of all the effort that companies put (and often are still putting) into writing providers for their own data?
Have faith—OLE DB is not a dead technology. Period. It still remains a fundamental specification for a feature-rich and general-purpose, .NET-independent programming interface. It is not specific to .NET, but it is well supported.
That said, if you have custom data to expose, you cannot ignore the advent of .NET and managed providers. What then is the best interface to dress up your data providers? How should you plan to expose your data right now, for example, starting next Monday morning at 8 A.M.?
.NET utilizes open standards and is extensively based on XML. Given this, if you have proprietary, yet text-based data to expose, you can simply consider publishing it using XML, perhaps using a custom schema. There are so many facilities in .NET to work with XML data that coming up with a wrapper class shouldn't be an issue at all.
For more complex data stores, OLE DB providers still make sense because you need to reach a much larger audience that may not be bound to .NET. For .NET specific applications, a managed provider can certainly give substantial performance advantages, but I would be very careful in this case as well—especially this Monday! Don't forget, no SDK for managed providers has been released yet, but Microsoft is committed to delivering one.
So summarizing, the next data provider I'm supposed to start writing this Monday morning will consist of a pair: an OLE DB provider plus a .NET wrapper class talking in XML. My first option wouldn't be to have the .NET class wrapped around the OLE DB provider through COM Interop. I'd rather employ the same, somehow adapted, source code. In this case, Managed C++ is probably the best language to use to facilitate the "physical" code reuse.
Take this as a sort of a forecast to be verified a couple of years from now. In the long run, I hazard to say that OLE DB will come to the same, relatively bad, end of SGML—the Standard Generalized Markup Language, forerunner of XML.
Introduced as the savior of the world of data exchange, SGML never became a de-facto standard, probably because it was too powerful and complex for everyday use. Fact is, its inspiring principles have been widely accepted only after being properly narrowed and specialized to originate XML.
My forecast is that as soon as .NET takes root, OLE DB will progressively lose importance until it disappears. I can't say how long this process will actually take, but I would bet on it.
Quotes available soon <g>. Stay tuned.
You probably live in another time dimension! How can you define legacy code as existing ADO code, in most cases written six months ago or later? How can you do that in the name of a technology/platform, .NET, that is not even at the second stage of its beta program?
What would life be without a bit of emphasis? Good point, anyway.
Can we really define legacy code as all the sharp rocks of ADO code found in many recent DNA systems? My answer is still "yes, we should." But I do understand that it definitely sounds baffling.
I consider "legacy code" the code that is no longer aligned with the core of the hosting platform. Believe me, this is exactly what is going to happen with .NET. Of course, there will be ways to integrate existing code, components and, applications in .NET.
.NET is a non-violent revolution that in the next few years will absorb any living instance of software in Windows. Resistance is futile—you'll be assimilated. Regardless of the age of the code, in my definition of what's legacy it's the alignment between sources and runtime that really matters. Of course, there will be ways to integrate existing code, components, and applications in .NET.
.NET changes the Windows runtime making it managed. While COM and the Windows SDK are not dead, you have to write code according to another model. Regardless of the underpinning of this new runtime, there's a brand new model to cope with. And this model will be the future model of Windows.
Windows is not dead, but it'll change. COM is not dead, but it'll have the face of .NET classes. ADO is not dead and it still works, but the .NET features of ADO.NET are the future of ADO.
.NET is not simply Windows 6.0 and ADO.NET is not a fancy new name for what could have been called ADO 3.0. It's different and pervasive. It's a new platform. All the rest is either another platform or, when integrated, is legacy code.
Legacy code has no age. I understand that people will be writing DNA systems this week, six months from now, and even when .NET actually ships. I'm not saying this is necessarily wrong or should be absolutely avoided. Simply be aware that you're swimming against the stream.

Dino is the author of the upcoming Building Web Solutions with ASP.NET and ADO.NET from Microsoft Press, and the cofounder of. You can reach Dino at dinoe@wintellect.com.
Can you try this new code
#include <SPI.h>
#include <NativeEthernet.h>
#define USING_STATIC_IP true
How To Install Using Arduino Library Manager
New in v1.5.0
1. The original Arduino WiFiNINA library only supports very limited boards, such as:...
Updated: Mar 22nd 2020
New in v1.0.1
1. New powerful-yet-simple-to-use feature to enable adding dynamic custom parameters from sketch and input using the same Config Portal. Config Portal will...
Hope you are doing OK now.
Just had some mods in the code so that the Config Portal AP SSID is using unique macAddress/ID of Teensy board now, for example ESP_AT_E50AB22C
Start...
Just read the schematic PDF of the board, and ESP is connected to Serial1 (TX1/RX1) of T4.
Arduino-Teensy4_PLUS_ESP-12E_r1-SCH.pdf
To test, you can change line 31 of the example code from
...
Currently, for T4, the example uses Serial2 of T4, and speed is 115KBauds
#define EspSerial Serial2 //Serial2, Pin RX2 : 7, TX2 : 8
...
// Your Teensy <-> ESP8266 baud rate:
#define...
If you install using Arduino IDE Library Manager, it might have asked you to install the dependency.
Otherwise, you need to manually install Functional-Vlpp
Hi,
Thanks for using the library.
This library depends on ESP8266_AT_WebServer Library.
Please install ESP8266_AT_WebServer Library then recompile.
The issue now is that the library is...
Better Config Portal GUI
The Config Portal screens:
1.Main Screen
19369
2. Input Credentials:
How To Install Using Arduino Library Manager
This library is a Light Weight Credentials / WiFi Manager for ESP8266 AT shields, specially designed to...
How To Install Using Arduino Library Manager
This library is a Credentials / WiFi Connection Manager with fallback web configuration portal...
Update Feb 20th 2020
Releases v1.0.6
***New in this version***
1. Add support for ENC28J60 Ethernet shields and other boards such as SAMD, etc,
2. Add checksum
3. Add more examples for...
Update Feb 20th 2020
Version v1.0.2
1. From v1.0.2+, the library supports many more Arduino boards (Atmel AVR-s, Atmel SAM3X8E ARM Cortex-M3, STM32F series, ESP8266, Intel ARC32(Genuino101),... | https://forum.pjrc.com/search.php?s=546256e88d6bde922091804efd97652e&searchid=6378163 | CC-MAIN-2021-25 | refinedweb | 353 | 60.51 |
Niranjan: the prefix mh is declared but not used, neither in the schema nor in the XML instance.
<element name="address" type="mh:USAddress" />
Niranjan: Now the addr is seen as a newly added prefix in the XML instance. Where did it come from suddenly?
Niranjan:What is the purpose of this line in the schema: <element name="address" type="mh:USAddress" />
Originally posted by Nitesh Kant:
So if you have an element without a prefix with a parent that declares a default namespace that is different form its own - what namespace does that element belong to?
4.1.13 SOAP Body and Namespaces

The use of unqualified element names may cause naming conflicts, therefore qualified names must be used for the children of soap:Body.

R1014 The children of the soap:Body element in a MESSAGE MUST be namespace qualified.
xpro wrote:First of all, you have to be very clear about what the rules are:
I'm trying to find a way to write strings which selects one of the words inside the brackets, so when the string is initialized "I would like to meet you" would go into s and another time "I would like to see you" would go into s. What is the best way to achieve this? how can I write it in a good clean way?
xpro wrote:Apart from what's been suggested you could introduce a "static" class, like
how can I write it in a good clean way?
You would still have to write code that analyses the template String in randomize.
public class MyStrUtils {
    private MyStrUtils() {} // never instantiate this class

    private static Random rnd = new Random(); // source of randomness

    public static String randomize(String template) {
        // interprets the template String and returns a converted String
        return null; // parsing logic to be implemented
    }
}
// String s = MyStrUtils.randomize("I would like to (see|meet) you");
Hope this helps.
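One possible way to fill in the randomize stub (a sketch only; the regex-based parsing is my own choice, not something from this thread) is to find each "(a|b|c)" group and replace it with a randomly chosen alternative:

```java
import java.util.Random;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MyStrUtils {
    private MyStrUtils() {} // never instantiate this class

    private static final Random rnd = new Random(); // source of randomness

    // matches a parenthesized group of two or more alternatives, e.g. "(see|meet)"
    private static final Pattern GROUP =
            Pattern.compile("\\(([^)|]+(?:\\|[^)|]+)+)\\)");

    public static String randomize(String template) {
        Matcher m = GROUP.matcher(template);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String[] options = m.group(1).split("\\|");
            // substitute a randomly chosen alternative for the whole group
            m.appendReplacement(sb,
                    Matcher.quoteReplacement(options[rnd.nextInt(options.length)]));
        }
        m.appendTail(sb);
        return sb.toString();
    }
}
```

With this in place, MyStrUtils.randomize("I would like to (see|meet) you") yields one of the two sentences at random, and a template without groups passes through unchanged.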
String[] myValues = new String[2];
myValues[0] = "meet";
myValues[1] = "see";
// pick one of the two at random
String s = String.format("I would like to %s you",
        myValues[new java.util.Random().nextInt(myValues.length)]);
As the title suggests, I would like to get python 3.5 to search my root ('C:\')
for pdf files and then move those files to a specific folder.
This task can easily split into 2:
1. Search my root for files with the pdf extension.
2. Move those to a specific folder.
Now. I know how to search for a specific file name, but not plural files that has a specific extension.
import os

print('Welcome to the Walker Module.')
print('find(name, path) or find_all(name, path)')

def find(name, path):
    for root, dirs, files in os.walk(path):
        print('Searching for files...')
        if name in files:
            return os.path.join(root, name)

def find_all(name, path):
    result = []
    for root, dirs, files in os.walk(path):
        print('Searching for files...')
        if name in files:
            result.append(os.path.join(root, name))
    return result
Your find_all function is very close to the final result. When you loop through the files, you can check their extension with os.path.splitext, and if they are .pdf files you can move them with shutil.move
Here's an example that walks the tree of a source directory, checks the extension of every file and, in case of match, moves the files to a destination directory:
import os
import shutil

def move_all_ext(extension, source_root, dest_dir):
    # Recursively walk source_root
    for (dirpath, dirnames, filenames) in os.walk(source_root):
        # Loop through the files in current dirpath
        for filename in filenames:
            # Check file extension
            if os.path.splitext(filename)[-1] == extension:
                # Move file
                shutil.move(os.path.join(dirpath, filename),
                            os.path.join(dest_dir, filename))

# Move all pdf files from C:\ to G:\Books
move_all_ext(".pdf", "C:\\", "G:\\Books")
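Since you are on Python 3.5, the same job can also be sketched with pathlib's rglob (an illustrative variant, not the answer's own code; it assumes the destination directory already exists, and note that two PDFs sharing a name would overwrite each other):

```python
import pathlib
import shutil

def move_all_pdfs(source_root, dest_dir):
    # rglob("*.pdf") recursively yields every file ending in .pdf
    for pdf in pathlib.Path(source_root).rglob("*.pdf"):
        shutil.move(str(pdf), str(pathlib.Path(dest_dir) / pdf.name))

# e.g. move_all_pdfs("C:\\", "G:\\Books")
```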
better add an alarm bell to wake up the user when all that was done! Thinking that there had to be a better way, I dug out my books on regular expressions and started wading through them again. An overwhelming sense of despair hit me faster than you can say 'regular expressions'. This article is my attempt to keep things simple. By the time you are done, you will be able to write simple validators, and you will know enough about regular expressions to dig into them further without slitting your wrists.
The project that accompanies this article contains a little application called Regex Tester. I have also included an MSI file to install the binary, for those who do not want to compile the application themselves.
Regex Tester is so simple as to be trivial. In fact, it took longer to create the icon than it took to code the app! It has a single window:
The Regex text box contains a regular expression (a ‘regex’). The one shown here matches a string with an unsigned integer value, with an empty field allowed. The Regex box is green because the regex is a valid regular expression. If it was invalid, the box would be red (okay, pink) like the Test box.
And speaking of the Test box, it’s red because its contents don’t match the pattern in the Regex box. If we entered a digit (or digits) in the Test box, it would be green, like the Regex box.
We’ll use Regex Tester shortly. But first, let’s give some thought to the mechanics of validating user input.
There are three ways to validate user input:

- Key-press validation checks each character as the user types it.
- Completion validation checks the contents of a control when the user tabs or clicks away from it.
- Submission validation checks the form as a whole when the user submits it.
I recommend against relying on submission validation in a Windows Forms application. Errors should be trapped sooner, as close to when they are entered as possible. Nothing is more annoying to a user than a dialog box with a laundry list of errors, and a form with as many red marks as a badly-written school paper is not far behind. The rule on validation is the same as the rule on voting in my home town of Chicago: “Validate early, and validate often.”
Windows provides a Validating event that can be used to provide completion validation for most controls. An event handler for the Validating event typically looks something like this:
Validating
private void textBox1_Validating(object sender, CancelEventArgs e)
{
Regex regex = new Regex("^[0-9]*$");
if (regex.IsMatch(textBox1.Text))
{
errorProvider1.SetError(textBox1, String.Empty);
}
else
{
errorProvider1.SetError(textBox1,
"Only numbers may be entered here");
}
}
The first line of the method creates a new regular expression, using the Regex class from the System.Text.RegularExpressions namespace. The regex pattern matches any integer value, with empty strings allowed.
Regex
System.Text.RegularExpressions
The second line tests for a match. The rest of the method sets the error provider if the match failed, or clears it if the match succeeded. Note that to clear the provider, we set it with an empty string.
To try out the code, create a new Windows Forms project, and place a text box and an error provider on Form1. Add a button to the form, just to give a second control to tab to.
Form1
Create an event handler for the text box Validating event, and paste the above code in the event handler. Now, type an ‘a’ in the text box and tab out of it. An error ‘glyph’ should appear to the right of the text box, like this:
If you move the mouse over the glyph, a tool tip should appear with the error message. Much more elegant, and far less jarring, than a message box!
But we can actually do better than that. Add an event handler for the text box TextChanged event, and paste the code from the Validating event into it. Now, run the program, and type an
The TextChanged event provides us with a means for performing key-press validation. This type of validation provides the user with immediate feedback when an invalid character is typed. It can be paired with completion validation to provide a complete check of data entered by the user. We will use several examples of key-press-completion validation later in this article.
I generally recommend using key-press validators to ensure that no invalid characters are entered into a field. Pair the key-press validator with a completion validator to verify that the contents of the control match any pattern that may be required. We will see several examples of this type of validation, using floating point numbers, dates, and telephone numbers, later in the article.
Submission validation will still be required for some purposes, such as ensuring that required fields are not left blank. But as a rule, it should be relied on as little as possible.
Now that we have seen how to validate, let$’. These two symbols mark the beginning and end of a line, respectively. Enter this into the Regex box of Regex Tester:
^$
The Regex box turns green, indicating that the regex is valid. The Test box is also green, indicating that an empty string is allowed. Now type something, anything, in the Test box. It doesn” reading of the regex.
To verify, type in any number of digits in the Test box. It stays green. But type in any other character, and the box turns red. What we have just done is create a regex that can validate a text box as containing an integer.
Clear the boxes, and enter the same regex again. Now, replace the asterisk with a plus sign. Now the regex reads: “Match any string that consists of a sequence of one or more digits”. Empty strings are now disallowed.
When you change the asterisk to a plus sign, the empty Test box should turn red, since empties are no longer allowed. Otherwise, the box behaves just as it did before. The only difference between the asterisk and the plus sign is that the asterisk allows empties, and the plus sign doesn’t. As we will see later, these symbols can be applied to an entire regex, or to part of a regex. We will see how, when we look at grouping.
The validators we worked with in the last section will validate integers, but not real numbers. Use either of the preceding regexes, and enter the following number:.45’ into the Test box. This time, the box should be red until you enter the decimal point. Then it turns green, until you enter the 4, when it turns red again. Clearly, we have some more work to do.
The Test box turned red until you hit the decimal point because it is a required character. No matter how many numbers you enter, they must be followed by a decimal point. That would work fine for a completion validator, but it won’t work for a key-press validator. In that case, we want to allow the period, but not require it.
The solution is to make the period optional. We do that by putting a question mark after the period, like this:
^[0-9]+\.?$
Try it in Regex Tester, entering 123.45 once again. The Test box will stay green until you type the ‘4’, at which point it will turn red again. We have taken care of the decimal-point problem, but we still need to do something about the numbers that follow it.
Our problem is that the decimal point is the last item specified in the pattern. That means, nothing can come after it. But we want more numbers to come after it! Then we should add them to our pattern, like this:
^[0-9]+\.?[0-9]*$
Try that pattern in the Regex box, and type ‘123.45’ in the Test box. The box should be red when it is empty, but green when an integer or a real number is typed into it. Note that for the decimal portion of the number, we used an asterisk, instead of a plus sign. That means, the decimal portion can have zero elements—we have, in effect, made the decimal portion of the number optional, as well.
There is only one problem that remains. Let’s assume we have a text box that we need to validate for a real number, and that an empty is not allowed for this text box. Our current validators will do the job for a completion validator, but not for a key-press validator. Why not?
Try this: select after the number in the Test box, then backspace until you remove the last digit. When you do, the box turns red. That means, if the user starts entering a number, then deletes it by backspacing, a glyph is going to pop up. To the user, that’s a bug. What it means for us is that we need slightly different regexes for the two validators. The key-press regex should disallow empties, but the completion regex needs to allow them.
To do that, change the plus sign in the regex to an asterisk, so that the regex looks like this:
^[0-9]*\.?[0-9]*$
Now, the Test box is green, even when it is empty. Use this modified regex in the key-press validator. This approach gives us the best of both worlds; the user gets immediate feedback while entering the number, and empties are flagged as errors without getting in the user’s way.
The preceding examples point out the need for validator pairs. In many cases, we need a key-press validator to ensure that no invalid characters are entered, and a separate completion validator to ensure that the field matches whatever pattern is required.
By now, you should be getting the hang of simple validators. Regular expressions are as powerful as you want them to be—they constitute a language all their own. But as you have seen, you can get started with just a few elements. Here are some other handy elements of regular expression syntax:
Groups: Parentheses are used to group items. Here is a regex for the comma-delimited list of integers I mentioned earlier:
^([0-9]*,?)*$
Here is how to read allowed by the pattern.
We can modify the pattern to allow spaces, by using a feature known as alternates. Alternates are specified by using the pipes (|) character within a group. Here is how the last regex looks when we allow a comma-space combination as an alternative to a simple comma:
^([0-9]*(,|, )?)*$
Note that there is a space after the second comma in the regex. The group with the commas now reads: “…followed by a comma, or a comma and a space…”
Now, type in a comma-delimited list of integers with a space after each comma. The Test box stays green, so long as you enter only a single space after each comma.
But suppose you want to let the user type an unlimited number of spaces after each comma? Here is an opportunity to test yourself. See if you can modify the last regex to allow unlimited spaces. Don’t peek at the answer until you have given it a try.
The simple answer is to add an asterisk after the space in the alternate group:
^([0-9]*(,|, *)?)*$
That regex will work, but we can tweak it a bit to make it more elegant. The comma-space-asterisk alternate means: “Followed by a comma, followed by zero or more spaces.” But that makes the first alternate redundant. And that means, we can delete the first alternate entirely, which brings us to the final answer:
^([0-9]*(, *)?)*$
Again, note the space after the comma. This solution is both shorter and easier to understand than our original solution.
Hopefully, this exercise gives you a feel for the process of developing a regular expression. They are not’.
Copy that regex to Regex Tester and give it a try. Note that the validation fails if the user enters dashes, instead of slashes, between the parts of the date. How could we increase the flexibility of our regex to accommodate dashes? Think about the question for a minute before moving on.
All we need to do is add a dash to the alternates group:
^([0-9]|/|-)*$
We could add other alternates to make the regex as flexible as it needs to be.
The completion validator does a final check to determine whether the input matches a complete date pattern:
^[0-2]?[1-9](/|-)[0-3]?[0-9](/|-)[1-2][0-9][0-9][0-9]$
The regex reads as follows: "Match any string that conforms to this pattern: The first character can be a 0, 1, or 2, and it may be omitted. The second character can be any number and is required. The next character can be a slash or a dash, and is required…”]’ allows upper or lower-case letters.
Our date regex also points out some of the limitations of regex validation. Paste the date regex shown into Regex Tester and try out some dates. The regex does a pretty good job with run-of-the-mill dates, but it allows some patently invalid ones, such as ‘29/29/2006’, or ‘12/39/2006'. The regex is clearly not ‘bulletproof’.
We could beef up the regular expression with additional features to catch these invalid dates, but it may be simpler to simply use a bit of .NET in the completion validator:
bool isValid = DateTime.TryParse(dateString, out dummy);
We gain the additional benefit that .NET will check the date for leap year validity, and so on. As always, the choice comes down to: What is simpler? What is faster? What is more easily understood? In my shop, we use a regex for the key-press validator, and DateTime.TryParse() for the completion validator.
DateTime.TryParse()
Telephone numbers: Telephone numbers are similar to dates, in that they follow a fixed pattern. Telephone numbers in the US follow the pattern (nnn) nnn-nnnn, where n equals any digit. But creating a regex for a telephone number presents a new problem: How do we include parentheses in our pattern, when parentheses are special characters that specify the start and end of a group?
Another way of stating the problem is: We want to use parentheses as literals, and not as special characters. To do that, we simply escape them by adding a backslash in front of them. Any special character (including the backslash itself) can be escaped in this manner.
Backslash characters are also used for shortcuts in regular expressions. For example,.”
The key-press validator will ensure that invalid characters are not entered, but it will not perform a full pattern matching. For that, we need a separate completion validator:
^\(\d\d\d\) \d\d\d-\d\d\d\d$
This validator specifies the positions of the parentheses and the dash, and the position of each digit. In other words, it verifies not only that all characters are valid, but that they match the pattern of a U.S. telephone number.
Neither one of these regular expressions is bulletproof, but they will give you a good starting point for developing your own regular expressions. If you come up with a good one, why not post it as a comment to this article?
We have barely scratched the surface of regular expressions, but we have accomplished what we set out to do. At the beginning of this article, I promised that you would be able to write simple validators, and that you would know enough to dig further into the subject. At this point, you should be able to do both.
There are thousands of regular expression resources on the Web. I would particularly recommend the following two web sites:
Also worthy of note is The 30 Minute Regex Tutorial, an article on CodeProject. It includes a nice regex utility called Expresso. The reason I didn to me!
This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
A list of licenses authors might use can be found here
public ContactInfo()
{
InitializeComponent();
txtFirstName.Tag = new RegexValidator("^[A-Z][a-z]{0-19}$", "First name is required. (Title case formatted)");
txtMiddleInitial.Tag = new RegexValidator("^[A-Z]?$", "Middle initial must be uppercase");
txtLastName.Tag = new RegexValidator("^[A-Z][a-z]{0-19}$", "Last name is required. (Title case formatted)");
txtAddress.Tag = new RegexValidator(@"(?=.*\w)", "Must enter at least one non-whitespace character");
txtCity.Tag = new RegexValidator(@"(?=.*\w)", "Must enter at least one non-whitespace character");
txtZip.Tag = new RegexValidator(@"^\d{5}(-\d{4})?$", "Must be of the form XXXXX or XXXXX-XXXX");
}
private void allTextBox_Validating(object sender, CancelEventArgs e)
{
Control validControl = (Control)sender;
//RegexValidator regExVal = new RegexValidator();
//regExVal = (RegexValidator)validControl.Tag;
RegexValidator regExVal = validControl.Tag as RegexValidator;
if (regExVal != null)
{
if (!regExVal.Validate(validControl.Text))
{
e.Cancel = true;
errorProvider1.SetError(validControl, regExVal.ErrorMessage);
}
}
}
</pre?>
<pre>
public bool Validate(string validatedStr)
{
string trimmedStr = validatedStr.Trim();
//for debugging purposes
bool result = Regex.IsMatch(trimmedStr, regExpPattern);
string resultStr = string.Format("The result of IsMatch() is: {0}", result);
MessageBox.Show(resultStr, "Debugging");
return Regex.IsMatch(trimmedStr, regExpPattern);
}
private void textBox1_Validating(object sender, CancelEventArgs e)<br />
{<br />
double parsedValue;<br />
bool success = double.TryParse(textBox1.Text, out parsedValue);<br />
if (success)<br />
{<br />
parsedValue = -(Math.Abs(parsedValue));<br />
textBox1.Text = parsedValue.ToString("N0");<br />
errorProvider1.SetError(textBox1, String.Empty);<br />
}<br />
else<br />
{<br />
errorProvider1.SetError(textBox1, "Not a number");<br />
}<br />
}
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/Articles/13255/Validation-with-Regular-Expressions-Made-Simple?msg=2785094 | CC-MAIN-2017-43 | refinedweb | 2,954 | 65.62 |
macOS • Ubuntu • Amazon Linux
WorkspaceWorkspace
Workspace automates management of Swift projects.
Πᾶν ὅ τι.
Whatever you do, work from the heart, as working for the Lord and not for men.
―שאול/Shaʼul
FeaturesFeatures
- Provides rigorous validation:
- Generates API documentation.
- Automates code maintenance:
- Automates open source details:
- Designed to interoperate with the Swift Package Manager.
- Manages projects for macOS, Windows, web, CentOS, Ubuntu, tvOS, iOS, Android, Amazon Linux and watchOS.
- Configurable
The Workspace WorkflowThe Workspace Workflow
(The following demonstration package is a real repository. You can use it to follow along.)
When the Repository Is ClonedWhen the Repository Is Cloned
The need to hunt down workflow tools can deter contributors. On the other hand, including them in the repository causes a lot of clutter. To reduce both, when a project using Workspace is pulled, pushed, or cloned...
git clone
...only one small piece of Workspace comes with it: A short script called
Refresh that has several platform variants.
Hmm... I wish I had more tools at my disposal... Hey! What if I...
Refresh the ProjectRefresh the Project
To refresh the project, double‐click the
Refresh script for your platform. (You can also execute the script from the command line if your system is set up not to execute scripts when they are double‐clicked.)
Refresh opens a terminal window, and in it Workspace reports its actions while it sets the project folder up for development. (This may take a while the first time, but subsequent runs are faster.)
This looks better. Let’s get coding!
[Add this... Remove that... Change something over here...]
...All done. I wonder if I broke anything while I was working? Hey! It looks like I can...
Validate ChangesValidate Changes
When the project seems ready for a push, merge, or pull request, validate the current state of the project by double‐clicking the
Validate script.
Validate opens a terminal window and in it Workspace runs the project through a series of checks.
When it finishes, it prints a summary of which tests passed and which tests failed.
Oops! I never realized that would happen...
SummarySummary
Refreshbefore working.
Validatewhen it looks complete.
Wow! That was so much easier than doing it all manually!
AdvancedAdvanced
While the above workflow is the simplest to learn, Workspace can also be installed as a command line tool that can be used in a wider variety of ways. Most notably, any individual task can be executed in isolation, which can speed things up considerably for users who become familiar with it.
Applying Workspace to a ProjectApplying Workspace to a Project
To apply Workspace to a project, run the following command in the root of the project’s repository. (This requires a full install.)
$ workspace refresh
By default, Workspace refrains from tasks which would involve modifying project files. Such tasks must be activated with a configuration file.
optIntoAllTasks() can be used in the configuration file to easily activate everything at once, no matter how much it might end up overwriting.
InstallationInstallation
Workspace provides command line tools.
They can be installed any way Swift packages can be installed. The most direct method is pasting the following into a terminal, which will either install or update them:
curl -sL | bash -s Workspace "" 0.40.5 "workspace help" workspace arbeitsbereich
ImportingImporting
Workspace provides a library for use with the Swift Package Manager.
Simply add Workspace as a dependency in
Package.swift:
let package = Package( name: "MyPackage", dependencies: [ .package( name: "Workspace", url: "", .upToNextMinor(from: Version(0, 40, 5)) ), ], targets: [ .target( name: "MyTarget", dependencies: [ .product(name: "WorkspaceConfiguration", package: "Workspace"), ] ) ] )
The module can then be imported in source files:
import WorkspaceConfiguration
AboutAbout
The Workspace project is maintained by Jeremy David Giesbrecht.
If Workspace saves you money, consider giving some of it as a donation.
If Workspace | https://swiftpackageregistry.com/SDGGiesbrecht/Workspace | CC-MAIN-2022-40 | refinedweb | 626 | 66.94 |
Lord Matt
Ranked #6,598 in People, #109,200 overall
All about Lord Matt
Also he wrote this lens.
Just for you strange people here is the Lord Matt profile on squidoo. Enjoy.
Lord Matt Super Geek
Lord Matt is your super geek. If you need geeking done fast then this is the blogger to call on.
Lord Matt - the blog
What has Lord Matt been saying
Live from the Lord Matt feed. This is a summary of the last few posts at the lord matt blog. I try to have something worthwhile to say every day and often try to ensure that my content has orginality and research. I cover a lot of topics some of the more specific ones are linked in the link list.
A gathering of Matt fans
Lord Matt - Popular with students
Matt is very popular with young learners of all ages
Hopefully in early 2010 a new batch of limited edition posters will be made available for student bedroom walls everywhere.
I bet you can not wait.
What are other people saying about Lord Matt?
Lord Matt, Super geek - a popular man.
- The things in your garden are, in fact, still mine
- I encountered one of the craziest things yet while doing the big gardening project. Copyrighted f...
- From 84 to 222 by feed - so have I made it then?
- It appears that I have finally gotten past the 100 readers mark. Or have I? Maybe I had always...
- What is it with socks?
- What is it about socks that makes them the most irritating thing on the planet? In about an ho...
@lordmatt
Lord Matt on twitter
Lord Matt's Tweets
- lordmatt
- aka Lord Matt
- 815 followers
- 710 following
-
- Games: Why Video Games Are Having a Harder Time With Humor
-
- Hey, just did a Plunder in Pirates, can you send me an Energy Boost? #zyngapirates #fb
-
- Games: Bethesda Releases Daggerfall For Free
-
- Oh no! The twitter fail pig! #SwineFlu
-
- Petted my Pet in Pirates, can you pet my Pet? #zyngapirates #fb
The IntFeLoMa
International Festival of Lord Matt
This is an image of one of the early IntFeLoMa 2009 banners being put up.
You can read more about IntFeLoMa on Stumble Upon, Twitter and the main blogs of the world. You know they are the main blogs because they talk about IntFeLoMa.
IntFeLoMa Advertising in Action
Latest IntFeLoMa from Google
IntFeLoMa it's coming to a world near you
- Chris Pirillo stoked with Gold VIP IntFeLoMa 2009 Ticket
- In a recent screen cast well known geek Chris Pirillo said that he was Stoked to be granted Gold ...
- IntFeLoMa News
- No more! from now on IntFeLoMa will bring you 3 whole days of fun fun fun. Make Lord Matt your religion today and not only can you call your self a Lordlette but you get to claim IntFeLoMa secondaries as required religious holidays. ...
- The 2009 International Festival of Lord Matt
- I feel I should squash rumours early about the appearance or lack thereof of Steve Jobs at the 2009 IntFeLoMa. Sadly Steve Jobs will not be joining us this year as he is taking some time off to recover his health. ...
IntFeLoMa Twitter Storm
Talking about IntFeLoMa
So who has their IntFeLoMa tickets then?
I say...
What others are saying...
More...
Matt - the sexiest man on earth
Lord Matt's Featured Lenses
You know Lord Matt bakes great lensesSome lenses created by Lord Matt. Go give him some extra vote-y love.
Local Government Ombudsman
The Local Government Ombudsman or LGO was supposed to protect the people from maladministration by councils and other local authorities. However, it is toothless and has no authority over councils. One blog describes the LGO thus "[...] you're putti...
How to Make Money: The Golden Cycle
I got started with nothing more than a few big ideas. Today I own my own company - the secrets I now share will enable anyone to not only do what I have done but probably do better too. I don't feel the need to sell this information but I give it awa...
godaddy, nodaddy, slowdaddy - some facts you might need
Go Daddy is an Internet domain registrar and web hosting company, which also sells e-business related software and services. Go Daddy was founded in 1997 by Bob Parsons Go Daddy has gathered a large number of critics and enemies on line and off. It i...
Lord Matt at Del.icio.us!
Want to know what I've been bookmarking? While have a look becasue now you can find out.
Other geeks worship Lord Matt
Links to Lord Matt things.
- The fantastic site of Lord Matt
- The Fantastic site of Lord Matt is like nothing else you have ever see before. It is a wacky blend of the comic and the profound, IT and ecology, News and Views; and Running a Business while managing a family. All this plus lots more (or is that less).
- Imaginary Hyper-Space
- Such an interesting part of the Lord Matt expirence that it got it's own lens.
- Hoaxes, Scams and BAD Behaviour: Get Wise.
- The first "usefull" information lens I made.
- php 101
- Lord Matt's introduction to php - articles for new php users.
- Lord Matt the Geek
- Lots of pure geekery.
- lord Matt on news and views
- A lively topic - I post here most often. I have lot's of views, especially on news and I don't mind expressing them.
- Lord Matt's Wiki
- Yes I have my own wiki... it's not all I have.
- Lord Matt's Forum
- I have my own forum - it's a little chatty back water - drop in sometime.
- NucleusCMS Plugin Review
- This special page within the Lord Matt wiki is dedicated to providing some third party views of plugins for NucleusCMS.
- Lord Matt's Hosting Offer
- Lord Matt can also sell you a dmain name, a server or some blog hosting.
- Orbit42 Base Class 1.3
- The base class of Orbit42.com (coded by Lord Matt) is an opensource (free) project released with the hope that others would benifit from a class that contians the basics already. The Object Oriented project is hosted by sourceforge.net. Such basics include iterative functions (allowing each parent in the chain to contribute without the need for complex namespace), debugging and error tracking system (from 1.1) and a callback manager (allows easy plugin structure). The aim is to create a system that makes the good use of the best design patterns an easier thing to create. It also supports simple abstraction to datalayer (this is MySQL by default). There is now a second version in development with better abstraction.
- Wikablog listing
- Just a basic listing - feel free to add extra bit's to it for me.
- Lord Matt's spamhuntress wiki profile
- The spamhuntress wiki is a wiki (funny that) for hunting down spammers.
- Lord Matt on My Space
- Yes, I have a myspace profile.
- Matthew Brown, Managing Director and Owner: Adullam Limited. That's Lord Matt in real life.
- Digital Point Forums - Profile: Lord Matt
- Search Engine Optimization and Marketing forum.
- MyBlogLog Profile
- MyBlogLog is very cool - if you have yet to sign up there you should do so.
- freshmeat.net
- User info page for Lord Matt
- sourceforge.net
- Developer profile for Matt at sourceforge.net
- wikipedia.org user page
- Yep, Lord Matt is at the wikipedia too. Get's everywhere this guy.
- Geeks Group
- This is a squidoo item - a group for geekish slef mad profiles.
What you think about what I think
...because you might be geek too.
Lord Matt and You
A new project from Lord Matt in connection with Ma more...1 point
A new project from Lord Matt in connection with Mad-Den.net - still undergoing final Alpha tests. Ssh - it's a secret. IE users may need to upgrade...1 point
Mad-Den Index :: Home of the Homepage Search
A directory with a difference. IE users may have more...1 point
A directory with a difference. IE users may have difficulty.1 point
GloballyLocal.Com
Insiders guide to hosting and domain names.1 point
The fantastic site and imaginary hyperspace of Lord Matt
The fantastic site and imaginary hyperspace of Lor more...1 point
The fantastic site and imaginary hyperspace of Lord Matt ruler of vast tracts of imaginary hyper-space. Know one can be told who Lord Matt is - you mu...1 point
Lord Matt's Girth Watch
Lord Matt on loosing the extra pounds...1 point
36 Quality Links
Getting and keeping 36 Great Links.0 points
A picture is sexier with Lord Matt in it.
Lord Matt Stuff on CafePress
Yes Lord Matt has a shop too.
How was your beverage? Large Mug.
On the scale of things that rule how did you cup rate? Was it minus 3 or 42 out of 10?
Lord Matt Magnet
Get a magnet with Lord Matt's face on it.
Lord Matt Button
Show your support for Lord Matt buy proudly showing this badge off.
Lord Matt Super Geek
Lord Matt's Geek Teddy Bear
Lord Matt says "wear my face". Support you fav' blogger by buying stuff.
A poll for you to play with
Here is a situation. You've gotten lost in Lord Matt's Imaginary Hyper-Space and your only hope for survival isto spend a year drifting in a life pod with one Imaginary hyper-character.
Fantastic Comments
You feedback, shout outs and all that
Come on, don't be shy. Everyone loves a cheerfull comment.
Margaret_Schaut wrote...
Welcome to the Squidoo Web companions group!
by LordMatt
Related Topics
LordMatt Recommends...
- Imaginary Hyper-Space
- Random People on a Strange Ship
- How to Make Money: The Golden Cycle
- Lord Matt's Super Geekery
- Pip Penguin | http://www.squidoo.com/lordmatt | crawl-002 | refinedweb | 1,626 | 75 |
.NET.
In this series, I want to take a dive into the project structure for both versions of the tooling. Later, we will discuss the compatibility and migration between the two versions.
- VS 2015 .NET Core Project Overview
- VS 2017 .NET Core Project Overview
- Migrating .NET Core Projects
VS 2015 .NET Core Project Overview
With the .NET Core Tools there are two ways we can create a project.
Lets start by creating a project with the command line tools.
Command Line Tools
Creating a new project is done with the
new command.
> dotnet new Created new C# project in C:\Jason\Demo\JrTech.Core101.Cmd.
This command will create two new files.
- Program.cs
- Project.json
Inside Program.cs file we see the traditional “Hello World!” message.
using System; namespace ConsoleApplication { public class Program { public static void Main(string[] args) { Console.WriteLine("Hello World!"); } } }
The project.json file is where the project metadata is defined. This replaces the project file from previous .NET projects.
{ "version": "1.0.0-*", "buildOptions": { "debugType": "portable", "emitEntryPoint": true }, "dependencies": {}, "frameworks": { "netcoreapp1.0": { "dependencies": { "Microsoft.NETCore.App": { "type": "platform", "version": "1.0.1" } }, "imports": "dnxcore50" } } }
Now lets use the
restore command to download all of our dependencies.
> dotnet restore log : Restoring packages for C:\Jason\Demo\JrTech.Core101.Cmd\project.json... log : Writing lock file to disk. Path: C:\Jason\Demo\JrTech.Core101.Cmd\project.lock.json log : C:\Jason\Demo\JrTech.Core101.Cmd\project.json log : Restore completed in 887ms.
We can see that a project.lock.json file has been created. During the package restore process, the dependencies from the project.json are analyzed so they can be downloaded with nuget. Not only do we want to gather the projects package references but, also the dependencies of those packages as well. For projects with a large amount of dependencies this can be a significant amount of work. As such, this information is cached in the project.lock.json file so, we can quickly restore in the future. The thing to keep in mind is this file is system generated and shouldn’t be modified directly.
With our dependencies restored, we are ready to build and run our project. This can be done by simply executing the
run command.
> dotnet run Project JrTech.Core101.Cmd (.NETCoreApp,Version=v1.0) will be compiled because expected outputs are missing Compiling JrTech.Core101.Cmd for .NETCoreApp,Version=v1.0 Compilation succeeded. 0 Warning(s) 0 Error(s) Time elapsed 00:00:01.8575574 Hello World!
When it is all said and done, we end up with a project structure as shown below.
├── bin
├── obj
├── Program.cs
├── project.json
├── project.lock.json
Now this is the cool part! From here we can go on our operating system of choice (Windows, Linux, or OSX) and open the project in our favorite editor (Visual Studio Code, Sublime Text, Atom, etc). Building the project is as easy as executing a command.
Next, lets take a look at creating the same project in Visual Studio 2015.
Visual Studio
With Visual Studio open, we select File | New | Project and choose the Console Application (.NET Core) project template.
Once this is completed, we observe the following contents in the
Solution Explorer.
On disk, we see the following files.
├── global.json
├── JrTech.Core101.Vs.sln
├── src
| └── JrTech.Core101.Vs
| ├── bin
| ├── obj
| ├── Properties
| | └── AssemblyInfo.cs
| ├── JrTech.Core101.Vs.xproj
| ├── Program.cs
| ├── project.json
| └── project.lock.json
A few of the files are replicas of what we saw in the previous example however, there are a couple new files that can be found. Some of these, such as the solution file and the assembly info file, we recognize from previous versions of .NET…
Some of them we will not.
The global.json file is new in .net core. This file provides solution level metadata on the sdk, projects, and nuget packages that are referenced by the solution. Below you will see the contents of this file.
{ "projects": [ "src", "test" ], "sdk": { "version": "1.0.0-preview2-003131" } }
The other new file is the xproj file. The xproj file is used to bridge the gap between the .NET Core project and Visual Studio. In contains, any MSBuild related information and other Visual Studio specific information.
<?xml version="1.0" encoding="UTF-8"?> <Project xmlns="" ToolsVersion="14.0"> <PropertyGroup> <ActiveDebugProfile>JrTech.Core101.Vs</ActiveDebugProfile> </PropertyGroup> </Project>
Summary
In this post, we took a look at creating a .NET Core project with Visual Studio 2015 (Core Tools 1.0.0-preview2). Next in this three part series, we will create the same project with Visual Studio 2017 RC (Core Tools 1.0.0-preview4). For further reading, please take a look at the following links listed below.
-
-
- | https://espressocoder.com/2017/01/17/net-core-project-overview-pt-1/ | CC-MAIN-2019-13 | refinedweb | 785 | 61.43 |
OpenGL Programming/Basics/Structure
Setting up a New Project
Depending on which development environment you plan to use, you might need to set up a new C++ project before you begin. The steps vary between IDEs, so check your IDE's manual if you are unsure about how to do this.
GLUT
Many OpenGL applications take advantage of a cross-platform (Windows, Mac and Linux compatible) toolkit called GLUT, which is used in conjunction with OpenGL. GLUT is used to draw windows and handle mouse and keyboard events, which OpenGL cannot do on its own. There are alternatives to GLUT, but GLUT is the simplest GUI toolkit that runs cross-platform, so we assume you have GLUT installed.
There are many disadvantages to GLUT, which are related to its inflexibility. Since GLUT has to work the same across three very different operating systems, you don't have any platform-specific features. GLUT itself cannot take advantage of operating-system provided widgets (buttons, pulldown menus). Your application will not be able to change window-level menus (although context-menus can be customized). If these limitations are dealbreakers for your project, you'll have to use an alternative to GLUT. These will be discussed later in this wikibook. For now, just sit tight with GLUT.
My First OpenGL Program
Open up (or create) your main.cpp file. You should be familiar with the basic format of this file:
#include <iostream>

int main(int argc, char *argv[])
{
    return 0;
}
Including the OpenGL Headers
We're going to need to add some includes here, so we can use OpenGL in our application. Depending on how and where you installed OpenGL, you might have to tweak these paths a bit.
#include <GL/gl.h>
#include <GL/glut.h>
#include <GL/glu.h>
This includes the basic OpenGL commands, GLUT, and a utility toolkit called GLU which is useful later. Note that if your operating system handles filenames case-sensitively (e.g. Linux), you must capitalize the GL before compiling.
If using Visual Studio, make sure to include windows.h before any of the OpenGL headers.
Creating a Window
Now that we have our library, we can begin to design our interface by creating our window inside our int main() function:
void display() { /* empty function required as of glut 3.0 */ }

int main(int argc, char *argv[])
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE);
    glutInitWindowSize(800, 600);
    glutCreateWindow("Hello World");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
We first call glutInit(), which starts up GLUT for our use. Next, we set up a display mode. For now, just use the settings we've provided here for glutInitDisplayMode. You might tweak these settings later. Then we tell GLUT how big we want our window to be; 800 by 600 is OK for now. Finally, we actually create the window with glutCreateWindow(), passing the window title as an argument, and give control of our program to GLUT through glutMainLoop(). Never forget to call glutMainLoop() when you use GLUT!
You might have noticed something so far: every function is prefixed with glut-. When we start actually using OpenGL itself, every function will be prefixed with just a gl-. This is for organization.
Try compiling this application. Hopefully, you should get an 800x600 window with the title Hello World. If you don't, refer to the installation section of this wikibook and consider reinstalling.
Getting Ready to Draw Stuff
Most simple OpenGL applications consist of only three elements:
- A setup function, which configures everything before you start actually drawing.
- A display function, which does the actual drawing, and
- int main(), which we've mostly configured above. int main just calls the setup function, and tells GLUT to use the display function when it needs to.
Some OpenGL code might look sort of like this:
void setup()
{
    /* configuration code goes here */
}

void display()
{
    /* drawing code goes here -- empty for now */
}

int main(int argc, char *argv[])
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE);
    glutInitWindowSize(800, 600);
    glutCreateWindow("Hello World");
    setup();
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
This code itself doesn't draw anything, since display() is empty.
Next Chapter: OpenGL Programming/Basics/Rectangles
how to use Region.onChange in pyton with jpype
i am using jpype in python to use sikuli but the code
Region.
and simple the Region and Screen is work
ex:
Region = (100,100,100,100) and screen.click() that will work
i want to ask the Region onChange is exists in Region or Screen Class and how to use it
Question information
- Language:
- English Edit question
- Status:
- Answered
- For:
- Sikuli Edit question
- Assignee:
- No assignee Edit question
- Last query:
- 2019-05-10
- Last reply:
- 2019-05-10
version 1.1.3
using jpype, you have to look at the Java API and use the classes, methods, ... as defined there.
The SikuliX Jython API in some places differ from the Java API.
The Jython wrapper is defined in Lib/sikuli/
BTW: I decided to use py4j as a python-to-java bridge: my project in an early stage: https:/
thx RaiMan i use jpype is not install jython but i go to C:\AppData\
onChange this Class or def and Region.py and Screen.py is not have too and i go to there
http://
look for onChange is true in Region but i don't sure there is true
and my another question [how to use config this question have solution?]
i will try this py4j thx
@RaiMan (raimund-hocke) i look the sikuli4python i just feel it good but it not have onchange class in there i hope the sikuli4python will be good and finally finish
and how to use java -jar path-to/
and it shows an error dialog titled "IDE not useable" with the content: "Neither jython nor jruby available: IDE not yet useable with javascript only. Please consult the docs for a solution."
Sorry, I did not want you to use sikulix4python (very early stage), was only for information, that I am working on a solution for Python.
You should stay with jpype for no, if it works for you.
onChange(): look here for a Java example:
https:/
The actual JavaDocs are here:
https:/
I really recommend to use SikuliX version 1.1.4, since I do not support versions up to 1.1.3 anymore for such special cases as yours.
thx RaiMan but i see the https:/
there is onchange Description:
public java.lang.String onChange
a subsequently started observer in this region should wait for changes in the region and notify the given observer about this event
minimum size of changes used: Settings.
for details about APPEAR/
Parameters:
observer - ObserverCallBack
Returns:
the event's name
so i have no idea to use onchange in jpype and i will try 1.1.4 to do my work will if you have good slove with jpype onchange plz tell me thx
You have to work along the Java outline:
// one has to combine observed event and its handler
// overriding the appropriate method
someRegion.
new ObserverCallBack() {
}
}
);
// run observation in foreground for 10 seconds
someRegion.
thx RaiMan i still no idea this code because this code language is java and how i do this in python language i use cpython and the orignal ide with python
uuups, ok ;-)
... but if you want to work with jpype against a Java API, you need some basic knowledge about how Java classes/
I have no time to check: it might be possible, that the above Java-typical callback construct cannot be implemented with jpype directly and might need a Python workaround.
i know thx RaiMan so if i want to use onchange in java api should to override the onchange? but why sikulix ide can use and don't override is not same? i just a beginner so... because i know ide is source in java and use python language i use in python to Region.Methods find onchange it show the onchange it not find 。that ok i will find another slove thx
Thanks RaiMan, that solved my question.
look like i find onchange but it another error
jpype._
at the Java level use:
someRegion.
where long is the minimum size for changed pixels (default 50)
so my Java example again with 2 parameters:// one has to combine observed event and its handler
// overriding the appropriate method
someRegion.
new ObserverCallBack() {
}
}
);
// run observation in foreground for 10 seconds
someRegion.
-- but why sikulix ide can use?
It is internally implemented using Jython internals on the Python side and reflection on the Java side.
so the onchange is work but i need to rewrite the ObserverCallBack the function ?
i do something but it not finish i find onchange and can use but it not return something
my method :
Jappeared = ObserverCallBac
def change(event):
print('change')
class Appeared(
pass
this class can override the father Jappeared and use
someRegion.
this can work but can not print('change') but the screen is sure change
i think is the // here goes your handler code need something? but i see the java api appeared the method is not print or return something
Nice try ;-)
IMHO:
class CallBack(
def changed(event): # this is what you have to overwrite
print ("in change handler")
someRegion.
hi RaiMan i do you told this but have some problem there is error
class CallBack(
and my code is
class CallBack(
def changed(event):
print ("change")
someRegion.
sorry, but I do not have the time to dive into that problem area.
As mentioned: I do not plan to work with jpype anyways.
ok thx RaiMan
RuntimeError: No matching overloads found for onChange in find. at native\common\jp_method.cpp:127
Pipes, another cross-program communication device, are made available in Python with the built-in os.pipe call.
Pipes are implemented much more deeply within the operating system, though. For instance, calls to read a pipe will normally block the caller until data becomes available (i.e., is sent by the program on the other end), rather than returning an end-of-file indicator. Because of such properties, pipes are also a way to synchronize the execution of independent programs.
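The core mechanics are easy to see in isolation. The sketch below (written in modern Python 3, unlike this chapter's Python 2 listings) creates a pipe and pushes a few bytes through it within a single process:

```python
import os

# os.pipe returns a pair of file descriptors:
# the read end first, then the write end
readfd, writefd = os.pipe()

os.write(writefd, b'spam')    # bytes pushed into the write end...
data = os.read(readfd, 32)    # ...come back out of the read end

os.close(writefd)
os.close(readfd)
```

Here the read returns immediately because data is already queued in the pipe; on an empty pipe, the same os.read call would block until another process (or thread) wrote to the other end.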
3.6.1 Anonymous Pipe Basics
Pipes come in two flavors -- anonymous and named. Named pipes (sometimes called "fifos") are represented by a file on your computer. Anonymous pipes only exist within processes, though, and are typically used in conjunction with process forks as a way to link parent and spawned child processes within an application -- parent and child converse over shared pipe file descriptors. Because named pipes are really external files, the communicating processes need not be related at all (in fact, they can be independently started programs).
Since they are more traditional, let's start with a look at anonymous pipes. To illustrate, the script in Example 3-15 uses the os.fork call to make a copy of the calling process as usual (we met forks earlier in this chapter). After forking, the original parent process and its child copy speak through the two ends of a pipe created with os.pipe prior to the fork. The os.pipe call returns a tuple of two file descriptors, representing the input and output ends of the pipe.
Example 3-15. PP2E\System\Processes\pipe1.py
import os, time

def child(pipeout):
    zzz = 0
    while 1:
        time.sleep(zzz)                          # make parent wait
        os.write(pipeout, 'Spam %03d' % zzz)     # send to parent
        zzz = (zzz+1) % 5                        # goto 0 after 4

def parent():
    pipein, pipeout = os.pipe()                  # make 2-ended pipe
    if os.fork() == 0:                           # copy this process
        child(pipeout)                           # in copy, run child
    else:                                        # in parent, listen to pipe
        while 1:
            line = os.read(pipein, 32)           # blocks until data sent
            print 'Parent %d got "%s" at %s' % (os.getpid(), line, time.time())

parent()
If you run this program on Linux ( pipe is available on Windows today, but fork is not), the parent process waits for the child to send data on the pipe each time it calls os.read. It's almost as if the child and parent act as client and server here -- the parent starts the child and waits for it to initiate communication.[7] Just to tease, the child keeps the parent waiting one second longer between messages with time.sleep calls, until the delay has reached four seconds. When the zzz delay counter hits 005, it rolls back down to 000 and starts again:
[7] We will clarify the notions of "client" and "server" in Chapter 10. There, we'll communicate with sockets (which are very roughly like bidirectional pipes for networks), but the overall conversation model is similar. Named pipes (fifos), described later, are a better match to the client/server model, because they can be accessed by arbitrary, unrelated processes (no forks are required). But as we'll see, the socket port model is generally used by most Internet scripting protocols.
[mark@toy]$ python pipe1.py
Parent 1292 got "Spam 000" at 968370008.322
Parent 1292 got "Spam 001" at 968370009.319
Parent 1292 got "Spam 002" at 968370011.319
Parent 1292 got "Spam 003" at 968370014.319
Parent 1292 got "Spam 004Spam 000" at 968370018.319
Parent 1292 got "Spam 001" at 968370019.319
Parent 1292 got "Spam 002" at 968370021.319
Parent 1292 got "Spam 003" at 968370024.319
Parent 1292 got "Spam 004Spam 000" at 968370028.319
Parent 1292 got "Spam 001" at 968370029.319
Parent 1292 got "Spam 002" at 968370031.319
Parent 1292 got "Spam 003" at 968370034.319
If you look closely, you'll see that when the child's delay counter hits 004, the parent ends up reading two messages from the pipe at once -- the child wrote two distinct messages, but they were close enough in time to be fetched as a single unit by the parent. Really, the parent blindly asks to read at most 32 bytes each time, but gets back whatever text is available in the pipe (when it becomes available at all). To distinguish messages better, we can mandate a separator character in the pipe. An end-of-line makes this easy, because we can wrap the pipe descriptor in a file object with os.fdopen, and rely on the file object's readline method to scan up through the next separator in the pipe. Example 3-16 implements this scheme.
Example 3-16. PP2E\System\Processes\pipe2.py
# same as pipe1.py, but wrap pipe input in stdio file object
# to read by line, and close unused pipe fds in both processes

import os, time

def child(pipeout):
    zzz = 0
    while 1:
        time.sleep(zzz)                           # make parent wait
        os.write(pipeout, 'Spam %03d\n' % zzz)    # send to parent
        zzz = (zzz+1) % 5                         # roll to 0 at 5

def parent():
    pipein, pipeout = os.pipe()                   # make 2-ended pipe
    if os.fork() == 0:                            # in child, write to pipe
        os.close(pipein)                          # close input side here
        child(pipeout)
    else:                                         # in parent, listen to pipe
        os.close(pipeout)                         # close output side here
        pipein = os.fdopen(pipein)                # make stdio input object
        while 1:
            line = pipein.readline()[:-1]         # blocks until data sent
            print 'Parent %d got "%s" at %s' % (os.getpid(), line, time.time())

parent()
This version has also been augmented to close the unused end of the pipe in each process (e.g., after the fork, the parent process closes its copy of the output side of the pipe written by the child); programs should close unused pipe ends in general. Running with this new version returns a single child message to the parent each time it reads from the pipe, because they are separated with markers when written:
[mark@toy]$ python pipe2.py
Parent 1296 got "Spam 000" at 968370066.162
Parent 1296 got "Spam 001" at 968370067.159
Parent 1296 got "Spam 002" at 968370069.159
Parent 1296 got "Spam 003" at 968370072.159
Parent 1296 got "Spam 004" at 968370076.159
Parent 1296 got "Spam 000" at 968370076.161
Parent 1296 got "Spam 001" at 968370077.159
Parent 1296 got "Spam 002" at 968370079.159
Parent 1296 got "Spam 003" at 968370082.159
Parent 1296 got "Spam 004" at 968370086.159
Parent 1296 got "Spam 000" at 968370086.161
Parent 1296 got "Spam 001" at 968370087.159
Parent 1296 got "Spam 002" at 968370089.159
3.6.2 Bidirectional IPC with Pipes
Pipes normally only let data flow in one direction -- one side is input, one is output. What if you need your programs to talk back and forth, though? For example, one program might send another a request for information, and then wait for that information to be sent back. A single pipe can't generally handle such bidirectional conversations, but two pipes can -- one pipe can be used to pass requests to a program, and another can be used to ship replies back to the requestor.[8]
[8] This really does have real-world applications. For instance, I once added a GUI interface to a command-line debugger for a C-like programming language by connecting two processes with pipes. The GUI ran as a separate process that constructed and sent commands to the existing debugger's input stream pipe and parsed the results that showed up in the debugger's output stream pipe. In effect, the GUI acted like a programmer typing commands at a keyboard. By spawning command-line programs with streams attached by pipes, systems can add new interfaces to legacy programs.
The module in Example 3-17 demonstrates one way to apply this idea to link the input and output streams of two programs. Its spawn function forks a new child program, and connects the input and output streams of the parent to the output and input streams of the child. That is, when the parent reads from its standard input, it receives text written to the child's standard output, and when the parent writes to its standard output, the text shows up on the child's standard input.
The net effect is that the two independent programs communicate by speaking over their standard streams.
Example 3-17. PP2E\System\Processes\pipes.py
############################################################
# spawn a child process/program, connect my stdin/stdout
# to child process's stdout/stdin -- my reads and writes
# map to output and input streams of the spawned program;
# much like popen2.popen2 plus parent stream redirection;
############################################################

import os, sys

def spawn(prog, *args):                      # pass progname, cmdline args
    stdinFd  = sys.stdin.fileno()            # get descriptors for streams
    stdoutFd = sys.stdout.fileno()           # normally stdin=0, stdout=1

    parentStdin, childStdout = os.pipe()     # make two ipc pipe channels
    childStdin, parentStdout = os.pipe()     # pipe returns (inputfd, outputfd)
    pid = os.fork()                          # make a copy of this process
    if pid:
        os.close(childStdout)                # in parent process after fork:
        os.close(childStdin)                 # close child ends in parent
        os.dup2(parentStdin,  stdinFd)       # my sys.stdin copy  = pipe1[0]
        os.dup2(parentStdout, stdoutFd)      # my sys.stdout copy = pipe2[1]
    else:
        os.close(parentStdin)                # in child process after fork:
        os.close(parentStdout)               # close parent ends in child
        os.dup2(childStdin,  stdinFd)        # my sys.stdin copy  = pipe2[0]
        os.dup2(childStdout, stdoutFd)       # my sys.stdout copy = pipe1[1]
        args = (prog,) + args
        os.execvp(prog, args)                # new program in this process
        assert 0, 'execvp failed!'           # os.exec call never returns here

if __name__ == '__main__':
    mypid = os.getpid()
    spawn('python', 'pipes-testchild.py', 'spam')      # fork child program

    print 'Hello 1 from parent', mypid                 # to child's stdin
    sys.stdout.flush()                                 # subvert stdio buffering
    reply = raw_input()                                # from child's stdout
    sys.stderr.write('Parent got: "%s"\n' % reply)     # stderr not tied to pipe!

    print 'Hello 2 from parent', mypid
    sys.stdout.flush()
    reply = sys.stdin.readline()
    sys.stderr.write('Parent got: "%s"\n' % reply[:-1])
The spawn function in this module does not work on Windows -- remember, fork isn't yet available there today. In fact, most of the calls in this module map straight to Unix system calls (and may be arbitrarily terrifying on first glance to non-Unix developers). We've already met some of these (e.g., os.fork), but much of this code depends on Unix concepts we don't have time to address well in this text. But in simple terms, here is a brief summary of the system calls demonstrated in this code: os.fork copies the calling process; os.pipe creates a pipe and returns descriptors for its two ends; os.close closes a descriptor; os.dup2 copies one file descriptor over another; and os.execvp overlays the calling process with a new program.
In terms of connecting standard streams, os.dup2 is the real nitty-gritty here. For example, the call os.dup2(parentStdin,stdinFd) essentially assigns the parent process's stdin file to the input end of one of the two pipes created; all stdin reads will henceforth come from the pipe. By connecting the other end of this pipe to the child process's copy of the stdout stream file with os.dup2(childStdout,stdoutFd), text written by the child to its sdtdout winds up being routed through the pipe to the parent's stdin stream.
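The effect of os.dup2 can be seen without any fork at all. Here is a small sketch of my own (written in modern Python 3, unlike the book's Python 2 listings) that temporarily points descriptor 1 -- the standard output -- at a temporary file, and then restores it:

```python
import os, tempfile

tmp = tempfile.TemporaryFile()
saved = os.dup(1)             # remember the real stdout descriptor
os.dup2(tmp.fileno(), 1)      # descriptor 1 now refers to the temp file
os.write(1, b'rerouted\n')    # writes to "stdout" land in the file
os.dup2(saved, 1)             # put the real stdout back
os.close(saved)

tmp.seek(0)
captured = tmp.read()         # b'rerouted\n'
```

The spawn function above plays exactly this trick, except that the descriptors being copied over stdin and stdout are pipe ends shared with the child process.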
To test this utility, the self-test code at the end of the file spawns the program shown in Example 3-18 in a child process, and reads and writes standard streams to converse with it over two pipes.
Example 3-18. PP2E\System\Processes\pipes-testchild.py
import os, time, sys

mypid     = os.getpid()
parentpid = os.getppid()
sys.stderr.write('Child %d of %d got arg: %s\n' % (mypid, parentpid, sys.argv[1]))

for i in range(2):
    time.sleep(3)            # make parent process wait by sleeping here
    input = raw_input()      # stdin tied to pipe: comes from parent's stdout
    time.sleep(3)
    reply = 'Child %d got: [%s]' % (mypid, input)
    print reply              # stdout tied to pipe: goes to parent's stdin
    sys.stdout.flush()       # make sure it's sent now else blocks
Here is our test in action on Linux; its output is not incredibly impressive to read, but represents two programs running independently and shipping data back and forth through a pipe device managed by the operating system. This is even more like a client/server model (if you imagine the child as the server). The text in square brackets in this output went from the parent process, to the child, and back to the parent again -- all through pipes connected to standard streams:
[mark@toy]$ python pipes.py
Child 797 of 796 got arg: spam
Parent got: "Child 797 got: [Hello 1 from parent 796]"
Parent got: "Child 797 got: [Hello 2 from parent 796]"
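As an aside (this is not part of the book's examples): in more recent Python versions, this kind of two-way stream linking is usually done with the standard subprocess module rather than raw fork and dup2 calls. A minimal Python 3 sketch:

```python
import subprocess, sys

# spawn a child whose stdin/stdout are connected to pipes we hold;
# the child here simply echoes its input back uppercased
child = subprocess.Popen(
    [sys.executable, '-c', 'print(input().upper())'],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

# send a line to the child's stdin and collect its stdout
reply, _ = child.communicate('hello from parent\n')
```

The module hides the pipe creation, forking, and stream rewiring behind a portable interface, which also makes it work on Windows.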
3.6.2.1 Deadlocks, flushes, and unbuffered streams
These two processes engage in a simple dialog, but it's already enough to illustrate some of the dangers lurking in cross-program communications. First of all, notice that both programs need to write to stderr to display a message -- their stdout streams are tied to the other program's input stream. Because processes share file descriptors, stderr is the same in both parent and child, so status messages show up in the same place.
More subtly, note that both parent and child call sys.stdout.flush after they print text to the stdout stream. Input requests on pipes normally block the caller if there is no data available, but it seems that shouldn't be a problem in our example -- there are as many writes as there are reads on the other side of the pipe. By default, though, sys.stdout is buffered, so the printed text may not actually be transmitted until some time in the future (when the stdio output buffers fill up). In fact, if the flush calls are not made, both processes will get stuck waiting for input from the other -- input that is sitting in a buffer and is never flushed out over the pipe. They wind up in a deadlock state, both blocked on raw_input calls waiting for events that never occur.
Keep in mind that output buffering is really a function of the filesystem used to access pipes, not pipes themselves (pipes do queue up output data, but never hide it from readers!). In fact it only occurs in this example because we copy the pipe's information over to sys.stdout -- a built-in file object that uses stdio buffering by default. However, such anomalies can also occur when using other cross-process tools, such as the popen2 and popen3 calls introduced in Chapter 2.
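The buffering issue is easy to reproduce in a few lines (again, a Python 3 sketch of my own): text written through a buffered file object wrapped around a pipe does not reach the pipe until the buffer is flushed.

```python
import os

r, w = os.pipe()
writer = os.fdopen(w, 'w')    # buffered text-mode wrapper around the pipe
reader = os.fdopen(r)

writer.write('hello\n')       # sits in the writer's buffer for now...
writer.flush()                # ...and only now enters the pipe itself
line = reader.readline()      # would block forever without the flush
```

If the flush call is removed and the writer and reader live in different processes, both sides can end up waiting on each other -- the deadlock described above.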
In general terms, if your programs engage in two-way dialogs like this, there are at least three ways to avoid buffer-related deadlock problems: flush output streams manually after each write, as done in these examples; open the pipe's file wrapper in unbuffered mode (for instance, by passing a buffer size of zero to os.fdopen); or run Python with the -u command-line flag to force unbuffered standard streams.
The last technique merits a few more words. Try this: delete all the sys.stdout.flush calls in both Examples Example 3-17 and Example 3-18 (files pipes.py and pipes-testchild.py) and change the parent's spawn call in pipes.py to this (i.e., add a -u command-line argument):
spawn('python', '-u', 'pipes-testchild.py', 'spam')
Then start the program with a command line like this: python -u pipes.py. It will work as it did with manual stdout flush calls, because stdout will be operating in unbuffered mode. Deadlock in general, though, is a bigger problem than we have space to address here; on the other hand, if you know enough to want to do IPC in Python, you're probably already a veteran of the deadlock wars.
3.6.3 Named Pipes (Fifos)
On some platforms, it is also possible to create a pipe that exists as a file. Such files are called "named pipes" (or sometimes, "fifos"), because they behave just like the pipes created within the previous programs, but are associated with a real file somewhere on your computer, external to any particular program. Once a named pipe file is created, processes read and write it using normal file operations. Fifos are unidirectional streams, but a set of two fifos can be used to implement bidirectional communication just as we did for anonymous pipes in the prior section.
Because fifos are files, they are longer-lived than in-process pipes and can be accessed by programs started independently. The unnamed, in-process pipe examples thus far depend on the fact that file descriptors (including pipes) are copied to child processes. With fifos, pipes are accessed instead by a filename visible to all programs regardless of any parent/child process relationships. Because of that, they are better suited as IPC mechanisms for independent client and server programs; for instance, a perpetually running server program may create and listen for requests on a fifo, that can be accessed later by arbitrary clients not forked by the server.
In Python, named pipe files are created with the os.mkfifo call, available today on Unix-like platforms and Windows NT (but not on Windows 95/98). This only creates the external file, though; to send and receive data through a fifo, it must be opened and processed as if it were a standard file. Example 3-19 is a derivation of the pipe2.py script listed earlier, written to use fifos instead of anonymous pipes.
Example 3-19. PP2E\System\Processes\pipefifo.py
#########################################################
# named pipes; os.mkfifo not available on Windows 95/98;
# no reason to fork here, since fifo file pipes are
# external to processes--shared fds are irrelevant;
#########################################################

import os, time, sys

fifoname = '/tmp/pipefifo'                       # must open same name

def child():
    pipeout = os.open(fifoname, os.O_WRONLY)     # open fifo pipe file as fd
    zzz = 0
    while 1:
        time.sleep(zzz)
        os.write(pipeout, 'Spam %03d\n' % zzz)
        zzz = (zzz+1) % 5

def parent():
    pipein = open(fifoname, 'r')                 # open fifo as stdio object
    while 1:
        line = pipein.readline()[:-1]            # blocks until data sent
        print 'Parent %d got "%s" at %s' % (os.getpid(), line, time.time())

if __name__ == '__main__':
    if not os.path.exists(fifoname):
        os.mkfifo(fifoname)                      # create a named pipe file
    if len(sys.argv) == 1:
        parent()                                 # run as parent if no args
    else:                                        # else run as child process
        child()
Because the fifo exists independently of both parent and child, there's no reason to fork here -- the child may be started independently of the parent, as long as it opens a fifo file by the same name. Here, for instance, on Linux the parent is started in one xterm window, and then the child is started in another. Messages start appearing in the parent window only after the child is started:
[mark@toy]$ python pipefifo.py
Parent 657 got "Spam 000" at 968390065.865
Parent 657 got "Spam 001" at 968390066.865
Parent 657 got "Spam 002" at 968390068.865
Parent 657 got "Spam 003" at 968390071.865
Parent 657 got "Spam 004" at 968390075.865
Parent 657 got "Spam 000" at 968390075.867
Parent 657 got "Spam 001" at 968390076.865
Parent 657 got "Spam 002" at 968390078.865

[mark@toy]$ file /tmp/pipefifo
/tmp/pipefifo: fifo (named pipe)

[mark@toy]$ python pipefifo.py -child
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards
The test suite for Boost.TR1 is relatively lightweight; tests have been added to the Boost.Config test suite for each new configuration macro, and each TR1 component has a very short concept check test added. The concept test programs are designed only to verify that all the TR1 components that are supposed to be in namespace std::tr1 are indeed present and have standards conforming interfaces. There are a few test programs (those which end in the suffix "_tricky") which do not currently compile with the Boost.TR1 implementation, because the relevant Boost libraries have not yet implemented the features tested; hopefully these incompatibilities will be removed in future releases.
The concept tests do not take account of compiler defects (quite deliberately so); the intent is that the tests can be used to verify conformance with the standard, both for Boost code, and for third party implementations. Consequently very many of these tests are known to fail with older compilers. This should not be taken as evidence that these compilers can not be used at all with Boost.TR1, simply that there are features missing that make those compilers non-conforming.
Full runtime tests for TR1 components are not in general part of this test suite, however, it is hoped that the Boost.TR1 component authors will make their regular test suites compile with the standards conforming headers as well as the Boost-specific ones. This will allow these tests to be used against the standard library's own TR1 implementation as well as the Boost one. | http://www.boost.org/doc/libs/1_43_0/doc/html/boost_tr1/testing.html | CC-MAIN-2018-13 | refinedweb | 281 | 60.24 |
0
Hi!
I am supposed to write a program for making a window. First of all, I must make a window with some given default values. Below is the code that I have been given as a template:
public class Window {
    private String name;
    private int height;
    private int width;
    private String isActive;
    private String color;
    int[] WindowPoint = new int[2];

    public Window() {
        name = "MyWindow";
        height = 6;
        width = 12;
        WindowPoint[0] = 0;
        WindowPoint[1] = 0;
        color = "white";
        isActive = "active";
    }
}
There's something that I cannot understand: In order to make a window, shouldn't I import the JFrame package and say that the Window class is a subclass of JFrame? If so, how can I have a Window constructor instead of using the JFrame built-in constructor in order to make my window? Is there another way to make a window, without using the JFrame or something similar?
Could someone explain this to me? Thank you in advance! | https://www.daniweb.com/programming/software-development/threads/445356/constructors-for-window | CC-MAIN-2018-05 | refinedweb | 158 | 58.92 |
What about if one of the relevant comparison functions is implemented in C?
import random

class WackyComparator(int):
    def __lt__(self, other):
        elem.__class__ = WackyList2
        return int.__lt__(self, other)

class WackyList1(list): pass

class WackyList2(list):
    def __lt__(self, other):
        raise ValueError

lst = list(map(WackyList1, [[WackyComparator(3), 5],
                            [WackyComparator(4), 6],
                            [WackyComparator(7), 7]]))
random.shuffle(lst)
elem = lst[-1]
lst.sort()
This code raises ValueError, and caching seems like it would cache the
comparator for WackyList1 objects, which is the same as the comparator for
'list' objects -- and midway through comparison, one of them changes type
to WackyList2, which has its own (broken) comparison function.
Python is very very dynamic ... | https://bugs.python.org/msg289447 | CC-MAIN-2019-18 | refinedweb | 111 | 50.43 |
Overview & Use Cases
00:00 Welcome to this section of the course, where we’ll take a look at online coding environments. We’re going to take a closer look at Repl.it. First, we’ll start off again with a quick overview and then use cases for online coding environments.
00:13 Let’s take a quick look at how a Repl.it environment looks like.
00:17 You’ll probably be presented with something like this. We have a place to write our code here on the left side, and then have a shell—or Python interpreter, terminal—sitting on the right side. And to show you what this can do I’m just going to click on this examples here.
00:33 Let's do a quick Input, for example. And I can run this, and on the right side, we see the terminal. It's asking me for my name, so I type in Martin, press Enter, and I get the printout that's happening here. It's a very simple program.
00:47 But what you can notice here is that everything that I’m doing is happening on the browser—so, somewhere in the cloud, on their servers. I have a username here, so I have an account.
00:57 You don’t even have to have an account in a couple of those, but the interesting part is I don’t need to install anything on my computer—well, that is, I need to have a browser. But if I have a browser, I can access this online coding environment anywhere. I can write code,
01:13 I can change it, I can do whatever I want. I can even import things, so here I can say import random.
01:22 You can already see also that we have code autocompletion, things like that.
01:34 Right? Ha, something like that. It doesn’t really matter. I just want to show you that you have a lot of the functionality that you have with a local coding environment, but you have it without having to deal with any sorts of virtual environments, Python installations, et cetera, et cetera.
01:49 You don’t even need to save your code anywhere because everything happens on the cloud and that is taken care of for you. A lot of online courses utilize some sort of online coding environments where they take away this effort of installing and getting things running locally, and just get you to code quickly.
02:07 This is one of the main use cases for working with one of those online coding environments. You really don’t have to worry about how to set anything up because everything is set up for you and you can simply go there, access it from anywhere, and then just run your code, get started coding.
02:24 I can take this—this is our URL—so I can go and I can open this up from anywhere else.
02:33 I’m here in an anonymous window and I type in this URL that got generated from the code that I wrote. And you can see, it’s loading.
02:47 And there it is—exactly the code that I just wrote before. I can access it. I could be on a different account now, different computer, different country, of course. And I can just run this code and see the output anyways.
02:59 There you go. So, that’s obviously very, very helpful if you want to share code with someone or if you want to work on something together—you can open this online coding environment, write some code, easily send it to someone, maybe you’re working with a student or with a mentor and you want to share some code with them, and its very interactive and collaborative.
03:20 And as opposed to GitHub, for example, where you can just have the code but you cannot execute it. In this case, everything happens online and in the cloud, so again, there’s no need for any local installs.
03:31 You just give that away to the company who’s running the online coding environment, and you just focus on the coding. Okay, so these are the main use cases, and I’m just showing you quickly how these look like, and we’re going to dive into it in more depth in this specific one called Repl.it in the coming videos. I’ll see you there.
Integrations
Channels has integrations with third party applications to make it easier to use Channels and get the most out of it.
Note: Integrations are only available to Pro plans and above.
Datadog
Datadog is an application performance monitoring platform. You can add Channels as an integration in order to monitor the metrics and alerts that you are interested in tracking.
In order to add Channels as an integration on Datadog, you’ll need a Datadog account.
- Go to your Channels account settings and select the tab for Datadog Integration.
- Paste your Datadog API key and save it.
- Return to Datadog, and on the ‘Dashboards’ tab, select Pusher to configure your dashboard, seen below.
Librato
Librato is an application performance monitoring platform. You can add Channels as an integration in order to monitor the metrics and alerts that you are interested in tracking.
In order to add Pusher as an integration on Librato, you’ll need a Librato account. If you don’t have one, you can create one here.
- Access your Librato tokens page, here.
- Generate a new ‘Record Only’ or ‘Full Access’ token. If you provide us with a full access token, we can set you up with a pre-built dashboard.
- Go to your Pusher account settings, select the tab for Librato Integration
- Paste your Librato Email and newly created API key and save it.
- Return to your Librato dashboard, and on the Metrics tab, you’ll see your new Pusher Channels metrics under the pusher. namespace.
If you provided us with a full access token, you can go to the Spaces tab and you’ll see a new dashboard (Pusher) which is already set up to give you some insight into the metrics we’re sending. You can modify this dashboard to your liking.
Metrics
This is a list of metrics sent to our integrations. An aggregate per app_id is sent around every 5 seconds.
Question:
I am writing an action in Django. I want to know which rows are updated by the action (or the id field of each row). I want to make a log of all the actions.
I have a field status which has 3 values: 'activate', 'pending', 'reject'. I have made an action for changing the status to activate. When I perform the action, I want to have a log of the rows that were updated, so I need some value that can be stored in the log, such as the id corresponding to each row.
Solution:1
As far as I can understand, you want to make an admin log entry for the object you update using your custom action. I actually did something like that, exactly as Django does it. As it's your custom action, you can add this piece of code.
Edit: Call this function after your action finishes, or rather I should say, after you change the status and save the object.
def log_it(request, object, change_message):
    """ Log this activity """
    from django.contrib.admin.models import LogEntry
    from django.contrib.contenttypes.models import ContentType
    LogEntry.objects.log_action(
        user_id = request.user.id,
        content_type_id = ContentType.objects.get_for_model(object).pk,
        object_id = object.pk,
        object_repr = change_message,  # Message you want to show in admin action list
        change_message = change_message,  # I used same
        action_flag = 4
    )

# call it after you save your object
log_it(request, status_obj, "Status %s activated" % status_obj.pk)
You can always get the object you updated by fetching the LogEntry object:
log_entry = LogEntry.objects.filter(action_flag=4)[:1]
log_entry[0].get_admin_url()
Hope this helps.
Solution:2
It is very easy!
Just loop over your queryset; then you can access each field of each row and store it wherever you want.
for e in queryset:
    if (e.status != "pending"):
        flag = False
CLONE
Section: Linux Programmer's Manual (2)
Updated: 2007-06-01
NAME
clone - create a child process
SYNOPSIS
#include <sched.h>

int clone(int (*fn)(void *), void *child_stack,
          int flags, void *arg, ...
          /* pid_t *pid, struct user_desc *tls, pid_t *ctid */ );
DESCRIPTION
clone() creates a new process, in a manner similar to fork(2).

CLONE_FS
- If CLONE_FS is set, the caller and the child process share the same filesystem information. This includes the root of the filesystem, the current working directory, and the umask.
- If CLONE_FS is not set, the child process works on a copy of the filesystem information of the calling process at the time of the clone call.

sys_clone
VERSIONS
There is no entry for clone() in libc5. glibc2 provides clone() as described in this manual page.
CONFORMING TO
The clone() and sys_clone calls are Linux specific and should not be used in programs intended to be portable.
NOTES
#include <syscall.h> pid_t mypid; mypid = syscall(SYS_getpid);
SEE ALSO
fork(2), futex(2), getpid(2), gettid(2), set_thread_area(2), set_tid_address(2), tkill(2), unshare(2), wait(2), capabilities(7), pthreads(7)
Get modular with Python functions
Minimize your coding workload by using Python functions for repeating tasks.
Are you confused by fancy programming terms like functions, classes, methods, libraries, and modules? Do you struggle with the scope of variables? Whether you're a self-taught programmer or a formally trained code monkey, the modularity of code can be confusing. But classes and libraries encourage modular code, and modular code can mean building up a collection of multipurpose code blocks that you can use across many projects to reduce your coding workload. In other words, if you follow along with this article's study of Python functions, you'll find ways to work smarter, and working smarter means working less.
This article assumes enough Python familiarity to write and run a simple script. If you haven't used Python, read my intro to Python article first.
Functions
Functions are an important step toward modularity because they are formalized methods of repetition. If there is a task that needs to be done again and again in your program, you can group the code into a function and call the function as often as you need it. This way, you only have to write the code once, but you can use it as often as you like.
Here is an example of a simple function:
#!/usr/bin/env python3
import time
def Timer():
print("Time is " + str(time.time() ) )
Create a folder called mymodularity and save the function code as timestamp.py.
In addition to this function, create a file called __init__.py in the mymodularity directory. You can do this in a file manager or a Bash shell:
$ touch mymodularity/__init__.py
You have now created your own Python library (a "module," in Python lingo) in your Python package called mymodularity. It's not a very useful module, because all it does is import the time module and print a timestamp, but it's a start.
To use your function, treat it just like any other Python module. Here's a small application that tests the accuracy of Python's sleep() function, using your mymodularity package for support. Save this file as sleeptest.py outside the mymodularity directory (if you put this into mymodularity, then it becomes a module in your package, and you don't want that).
#!/usr/bin/env python3
import time
from mymodularity import timestamp
print("Testing Python sleep()...")
# modularity
timestamp.Timer()
time.sleep(3)
timestamp.Timer()
In this simple script, you are calling your timestamp module from your mymodularity package (twice). When you import a module from a package, the usual syntax is to import the module you want from the package and then use the module name + a dot + the name of the function you want to call (e.g., timestamp.Timer()).
You're calling your Timer() function twice, so if your timestamp module were more complicated than this simple example, you'd be saving yourself quite a lot of repeated code.
Save the file and run it:
$ python3 ./sleeptest.py
Testing Python sleep()...
Time is 1560711266.1526039
Time is 1560711269.1557732

According to your test, the sleep function in Python is pretty accurate: after three seconds of sleep, the timestamp was successfully and correctly incremented by three, with a little variance in microseconds.
The structure of a Python library might seem confusing, but it's not magic. Python is programmed to treat a folder full of Python code accompanied by an __init__.py file as a package, and it's programmed to look for available modules in its current directory first. This is why the statement from mymodularity import timestamp works: Python looks in the current directory for a folder called mymodularity, then looks for a timestamp file ending in .py.
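Concretely, the layout built so far looks like this:

```
mymodularity/
    __init__.py
    timestamp.py
sleeptest.py
```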
What you have done in this example is functionally the same as this less modular version:
#!/usr/bin/env python3
import time
print("Testing Python sleep()...")
# no modularity
print("Time is " + str(time.time() ) )
time.sleep(3)
print("Time is " + str(time.time() ) )
For a simple example like this, there's not really a reason you wouldn't write your sleep test that way, but the best part about writing your own module is that your code is generic so you can reuse it for other projects.
You can make the code more generic by passing information into the function when you call it. For instance, suppose you want to use your module to test not the computer's sleep function, but a user's sleep function. Change your timestamp code so it accepts an incoming variable called msg, which will be a string of text controlling how the timestamp is presented each time it is called:
#!/usr/bin/env python3
import time
# updated code
def Timer(msg):
print(str(msg) + str(time.time() ) )
Now your function is more abstract than before. It still prints a timestamp, but what it prints for the user is undefined. That means you need to define it when calling the function.
The msg parameter your Timer function accepts is arbitrarily named. You could call the parameter m or message or text or anything that makes sense to you. The important thing is that when the timestamp.Timer function is called, it accepts some text as its input, places whatever it receives into a variable, and uses the variable to accomplish its task.
Here's a new application to test the user's ability to sense the passage of time correctly:
#!/usr/bin/env python3
from mymodularity import timestamp
print("Press the RETURN key. Count to 3, and press RETURN again.")
input()
timestamp.Timer("Started timer at ")
print("Count to 3...")
input()
timestamp.Timer("You slept until ")
Save your new application as response.py and run it:
$ python3 ./response.py
Press the RETURN key. Count to 3, and press RETURN again.
Started timer at 1560714482.3772075
Count to 3...
You slept until 1560714484.1628013
Functions and required parameters
The new version of your timestamp module now requires a msg parameter. That's significant because your first application is broken because it doesn't pass a string to the timestamp.Timer function:
$ python3 ./sleeptest.py
Testing Python sleep()...
Traceback (most recent call last):
File "./sleeptest.py", line 8, in <module>
timestamp.Timer()
TypeError: Timer() missing 1 required positional argument: 'msg'
Can you fix your sleeptest.py application so it runs correctly with the updated version of your module?
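One possible fix is simply to pass a message string with each call. Sketched here with the updated Timer() inlined so the snippet stands alone (in the article's layout, Timer() lives in mymodularity/timestamp.py and is imported as before):

```python
#!/usr/bin/env python3

import time

# Stand-in for the updated mymodularity/timestamp.py:
def Timer(msg):
    print(str(msg) + str(time.time()))

# Fixed sleeptest.py: each call now supplies the required msg argument.
print("Testing Python sleep()...")
Timer("Time is ")
time.sleep(3)
Timer("Time is ")
```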
Variables and functions
By design, functions limit the scope of variables. In other words, if a variable is created within a function, that variable is available to only that function. If you try to use a variable that appears in a function outside the function, an error occurs.
Here's a modification of the response.py application, with an attempt to print the msg variable from the timestamp.Timer() function:
#!/usr/bin/env python3
from mymodularity import timestamp
print("Press the RETURN key. Count to 3, and press RETURN again.")
input()
timestamp.Timer("Started timer at ")
print("Count to 3...")
input()
timestamp.Timer("You slept for ")
print(msg)
Try running it to see the error:
$ python3 ./response.py
Press the RETURN key. Count to 3, and press RETURN again.
Started timer at 1560719527.7862902
Count to 3...
You slept for 1560719528.135406
Traceback (most recent call last):
File "./response.py", line 15, in <module>
print(msg)
NameError: name 'msg' is not defined
The application returns a NameError message because msg is not defined. This might seem confusing because you wrote code that defined msg, but you have greater insight into your code than Python does. Code that calls a function, whether the function appears within the same file or if it's packaged up as a module, doesn't know what happens inside the function. A function independently performs its calculations and returns what it has been programmed to return. Any variables involved are local only: they exist only within the function and only as long as it takes the function to accomplish its purpose.
Return statements
If your application needs information contained only in a function, use a return statement to have the function provide meaningful data after it runs.
They say time is money, so modify your timestamp function to allow for an imaginary charging system:
#!/usr/bin/env python3
import time
def Timer(msg):
print(str(msg) + str(time.time() ) )
charge = .02
return charge
The timestamp module now charges two cents for each call, but most importantly, it returns the amount charged each time it is called.
Here's a demonstration of how a return statement can be used:
#!/usr/bin/env python3
from mymodularity import timestamp
print("Press RETURN for the time (costs 2 cents).")
print("Press Q RETURN to quit.")
total = 0
while True:
kbd = input()
if kbd.lower() == "q":
print("You owe $" + str(total) )
exit()
else:
charge = timestamp.Timer("Time is ")
total = total+charge
In this sample code, the variable charge is assigned as the endpoint for the timestamp.Timer() function, so it receives whatever the function returns. In this case, the function returns a number, so a new variable called total is used to keep track of how many charges have been made. When the application receives the signal to quit, it prints the total charges:
$ python3 ./charge.py
Press RETURN for the time (costs 2 cents).
Press Q RETURN to quit.
Time is 1560722430.345412
Time is 1560722430.933996
Time is 1560722434.6027434
Time is 1560722438.612629
Time is 1560722439.3649364
q
You owe $0.1
Inline functions
Functions don't have to be created in separate files. If you're just writing a short script specific to one task, it may make more sense to just write your functions in the same file. The only difference is that you don't have to import your own module, but otherwise the function works the same way. Here's the latest iteration of the time test application as one file:
#!/usr/bin/env python3
import time
total = 0
def Timer(msg):
print(str(msg) + str(time.time() ) )
charge = .02
return charge
print("Press RETURN for the time (costs 2 cents).")
print("Press Q RETURN to quit.")
while True:
kbd = input()
if kbd.lower() == "q":
print("You owe $" + str(total) )
exit()
else:
charge = Timer("Time is ")
total = total+charge
It has no external dependencies (the time module is included in the Python distribution), and produces the same results as the modular version. The advantage is that everything is located in one file, and the disadvantage is that you cannot use the Timer() function in some other script you are writing unless you copy and paste it manually.
Global variables
A variable created outside a function has nothing limiting its scope, so it is considered a global variable.
An example of a global variable is the total variable in the charge.py example used to track current charges. The running total is created outside any function, so it is bound to the application rather than to a specific function.
A function within the application has access to your global variable, but to get the variable into your imported module, you must send it there the same way you send your msg variable.
Global variables are convenient because they seem to be available whenever and wherever you need them, but it can be difficult to keep track of their scope and to know which ones are still hanging around in system memory long after they're no longer needed (although Python generally has very good garbage collection).
Global variables are important, though, because not all variables can be local to a function or class. That's easy now that you know how to send variables to functions and get values back.
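For example (a variation on charge.py, not code from the article): pass the global in as an argument, and assign the returned value back to it:

```python
import time

def Timer(msg, running_total):
    """Print a timestamp and return the new running total."""
    print(str(msg) + str(time.time()))
    return running_total + .02

total = 0                          # global variable
total = Timer("Time is ", total)   # send it in, get the new value back
total = Timer("Time is ", total)
print("You owe $" + str(total))
```

The function never touches the global directly; it only sees the value it was handed and reports a new value back.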
Wrapping up functions
You've learned a lot about functions, so start putting them into your scripts—if not as separate modules, then as blocks of code you don't have to write multiple times within one script. In the next article in this series, I'll get into Python classes.
6 Comments
This is fabulous Seth. You’ve illustrated and explained much that I have seen nowhere else. This is the best article I’ve seen on functions.
Thanks, Don. I hope it helps new programmers everywhere, because - wow, could I have used this information back when I was getting started!
i still make little mistakes with local and global variables. this is quite insightful.
noce
Good article. The python articles are my fav!
Very nice article. I also have few questions.
Is it possible to create module inside a module?
Is it possible to create a function inside a function?
How to use those hierarchical functions and modules? | https://opensource.com/article/19/7/get-modular-python-functions | CC-MAIN-2020-05 | refinedweb | 2,162 | 65.93 |
Good day guys. I have been trying to find a thread that talks about how to embed a webpage into a Dash app, where the content that will be displayed changes whenever someone selects something, either through a dropdown menu or via a button selection. e.g.
app.layout=html.Div([
html.Div([ Drodown])
html.Div([webpage is embedded here based on value selected])
])
@app.callback([…
I imagine you could achieve this through a Dash callback that targets the ‘children’ attribute of the container element you want the page to be embedded in, and then depending on the dropdown value, return an iframe with the appropriate ‘src’ attribute.
More or less what I am looking to achieve. Any link to an example? I am new to Dash and have been experimenting with it. I actually haven't done any web development, but I was used to Python for doing computations and simulations, so now I am trying to make it attractive for a front-end viewer.
Here’s an example I made up:
import dash
from dash.dependencies import Input, Output
import dash_core_components as dcc
import dash_html_components as html

app = dash.Dash()

app.layout = html.Div([
    html.Div(id='target'),
    dcc.Dropdown(
        id='dropdown',
        options=[
            {'label': 'Video 1', 'value': 'video1'},
            {'label': 'Video 2', 'value': 'video2'},
            {'label': 'Video 3', 'value': 'video3'},
        ],
        value='video1'
    )
])

@app.callback(Output('target', 'children'),
              [Input('dropdown', 'value')])
def embed_iframe(value):
    videos = {
        'video1': 'sea2K4AuPOk',
        'video2': '5BAthiN0htc',
        'video3': 'e4ti2fCpXMI',
    }
    return html.Iframe(src=f'https://www.youtube.com/embed/{videos[value]}')
Where can I see the output?
Is it under… Please confirm @nedned @chriddyp
Do we need to use the same link for all our apps, or can we customize it?
Thanks!
If you’re running the app on your local machine, then yes, you will be accessing it from (or). You can however specify a different port number when invoking the run_server method on the dash app (eg
app.run_server(port=5000)). This is useful if you want to have multiple apps running simultaneously.
If you want to make it publicly available, you can of course look into various hosting options, including using your own domain name.
Hi @nedned I am wondering if there is a way also to embed PDF files using html.Iframe.
Here is how I tried, but I could not get it to work.
import dash
import dash_html_components as html
import base64

app = JupyterDash('aa')
app.layout = html.Div([
    html.H1('Title'),
    html.Iframe(src="assets/example.pdf")
])
app
It has been 2 years since this conversation, but I would really appreciate it if you could help me.
SQL decomposition and testing (SQLite)
14 Replies - 1983 Views - Last Post: 27 January 2014 - 01:21 PM
#1
SQL decomposition and testing (SQLite)
Posted 27 January 2014 - 06:54 AM
In a personal project that I'm planning, I want to keep as much of the logic in the database as possible. I usually avoid all things database and have been looking for an excuse to get my hands dirty for a while. This project seems ideal because the data that the user will want to see is a distilled summary of the data that needs to be stored in the database.
At the same time, I have a work project that deals with analysing some scientific data. I've done the required analysis for some experiments "manually" in a spreadsheet and think it's time to automate it. Rather than my usual approach of writing some scripts to parse the data, I thought it was about time to do this sort of stuff in a database.
SQLite seemed great for both of these (an embedded application database for the former, and a standalone engine for data processing for the latter).
Questions:
What are the best techniques for decomposing SQL? This is difficult to google because I keep getting this. However, I have found a couple of pages saying views are useful for decomposing and reusing SQL. I've also found some comments saying it's a bad idea and others saying it helps performance, although I'm sure this is dependent on the DBMS. Are there better things I could be doing?
Following on from this, what is the best way of separating logical aspects of my design? For example, my second project might have several tables in each category of raw data, interim processing, final results, visualisation scripts. Java has packages. .NET has namespaces. Is there any equivalent for databases?
Finally, what about testing? I would usually do this stuff in a programming language and write unit tests. I could still use unit tests to populate some data, test the results and clean up the test database but I can't help wondering if there is a better tool.
Replies To: SQL decomposition and testing (SQLite)
#2
Re: SQL decomposition and testing (SQLite)
Posted 27 January 2014 - 07:00 AM
is this what you are asking about?
#3
Re: SQL decomposition and testing (SQLite)
Posted 27 January 2014 - 07:26 AM
Are you talking about stored procedures? SQLite doesn't have them.
#4
Re: SQL decomposition and testing (SQLite)
Posted 27 January 2014 - 07:41 AM
Right, you have tables. Most of your joins on those tables are essentially there to de-normalize things. Most of the rest aggregate. You can make views over those tables containing all the foreign-key-style joins you could make. Make the logical group bys. There are probably more, but you'll figure it out as you go. Now, from this point forward, you consider your tables and views the record sets available: no more joins or grouping.

At this point, the only thing you need to work with are essentially filters. Where clauses. These can simply be picked from the available fields and use basic where grammar.
#5
Re: SQL decomposition and testing (SQLite)
Posted 27 January 2014 - 07:59 AM
cfoley, on 27 January 2014 - 09:26 AM, said:
Are you talking about stored procedures? SQLite doesn't have them.
I would recommend downloading MS SQL -- it will make your life a lot easier.
#6
Re: SQL decomposition and testing (SQLite)
Posted 27 January 2014 - 09:34 AM
Quote
I think this is pretty much what I was talking about, but expressed far more clearly. Thanks!
What about testing then? In making those views of joins and group bys, I will also be including formulae. One example would be standard deviation (chosen because SQLite doesn't appear to have a function for that). This is something I would want to test if it were coded in a traditional language and I feel like I should do so here. Is the done thing to throw together some unit tests in another language or is there something better?
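To make that concrete, here's the sort of harness I have in mind (schema and numbers invented for the example; Python's sqlite3 and statistics modules do the checking):

```python
import sqlite3
import statistics

# In-memory database standing in for the real one (schema is made up).
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE measurement (run_id INTEGER, value REAL);
    -- SQLite has no stddev() aggregate, so derive population variance
    -- from the identity var(x) = avg(x*x) - avg(x)^2.
    CREATE VIEW run_stats AS
    SELECT run_id,
           AVG(value)                                   AS mean,
           AVG(value * value) - AVG(value) * AVG(value) AS variance
    FROM measurement
    GROUP BY run_id;
""")

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
db.executemany("INSERT INTO measurement VALUES (1, ?)", [(v,) for v in data])

mean, variance = db.execute(
    "SELECT mean, variance FROM run_stats WHERE run_id = 1").fetchone()

# Check the view against Python's reference implementation.
assert abs(mean - statistics.mean(data)) < 1e-9
assert abs(variance - statistics.pvariance(data)) < 1e-9
print("run_stats view OK: mean=%g variance=%g" % (mean, variance))
```

Wrapped in a unittest setUp/tearDown, each test could populate a fresh in-memory database, so nothing needs cleaning up.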
Quote
It seems to be Windows only. The attraction of SQLite to me is the lack of installation, configuration and its availability on multiple platforms (I work in Windows and Linux and aspire to own a Mac one day). What makes you recommend MS SQL? Is it just the stored procedures thing?
#7
Re: SQL decomposition and testing (SQLite)
Posted 27 January 2014 - 09:45 AM
cfoley, on 27 January 2014 - 11:34 AM, said:
SQL is a declarative language. In essence, it is a functional language. If you've written your expression correctly and it returns the result set you expect, you're pretty much done.
The only way it would be invalid is if the data returned didn't conform to expectations. That test is done by users. If you can think of a unit test for a SQL query, then you've probably already written your SQL to pass it.
It's more of a GIGO (Garbage In, Garbage Out) thing. Your SQL will always validate, even if your question is dumb.
#8
Re: SQL decomposition and testing (SQLite)
Posted 27 January 2014 - 09:50 AM
Here is a simple procedure that we use every day -- I have removed the procedure name for obvious reasons.
alter procedure [some_procedure]
    @business_id int,
    @vendor_class_id int
AS
BEGIN
    SELECT distinct t_Vendor_Employee.vendor_id, t_Vendor_Employee.vendor_employee_id,
        t_Vendor.txt_Company_Name, t_Employee.txt_first_name, t_Employee.txt_last_name,
        t_Employee.txt_email, t_state.txt_state, t_vendor_address.txt_address_1,
        t_vendor_address.txt_city, t_vendor_address.txt_zip_code,
        t_Business_Vendor_Status.txt_Status_Name,
        (select top 1 t_Vendor_Class.txt_Vendor_Class_Name
         from t_Vendor_Class
         inner join t_Vendor_Department
             on t_Vendor_Department.vendor_class_ID = t_Vendor_Class.vendor_class_ID
         where vendor_class_ID = @vendor_class_id) as V_class
    from t_Vendor_Employee
    INNER JOIN t_Employee with(nolock) ON t_Vendor_Employee.employee_ID = t_Employee.employee_ID
    INNER JOIN t_Vendor with(nolock) ON t_Vendor_Employee.Vendor_ID = t_Vendor.vendor_ID
    inner join t_Business_Vendor with(nolock) on t_business_vendor.Vendor_ID = t_vendor_employee.Vendor_ID
    inner join t_Business_Vendor_Employee_Detail with(nolock)
        on t_Business_Vendor_Employee_Detail.Business_Vendor_ID = t_Business_Vendor.business_Vendor_ID
        and t_Business_Vendor_Employee_Detail.Vendor_Employee_ID = t_vendor_employee.vendor_employee_id
    INNER JOIN t_Business_Vendor_Status with(nolock)
        ON t_Business_Vendor_Status.Business_Vendor_Status_ID = t_Business_Vendor_Employee_Detail.Business_Vendor_Status_ID
    inner join t_vendor_address_detail with(nolock) on t_vendor.vendor_ID = t_vendor_address_detail.vendor_id
    inner join t_vendor_address with(nolock) on t_vendor_address_detail.vendor_address_id = t_vendor_address.vendor_address_id
    inner join t_State with(nolock) on t_vendor_address.state_ID = t_State.state_id
    left outer join t_Vendor_Department with(nolock) on t_Vendor_Department.Vendor_ID = t_Business_Vendor.Vendor_ID
    WHERE business_ID = @business_id
        and t_vendor.vendor_id in (select vendor_id from t_Vendor_Department where vendor_class_ID = @vendor_class_id)
        and t_Business_Vendor_Status.Business_Vendor_Status_ID = 5
    order by t_state.txt_state, t_Vendor_Employee.vendor_employee_id
end
#9
Re: SQL decomposition and testing (SQLite)
Posted 27 January 2014 - 10:36 AM
Right, rule #1 with stored procedures. They should only be used if a view won't work. In particular, loops should only be used if all else fails. Additionally, using IN with a select is worst case scenario.
Forgive, but I had to refactor:
SELECT distinct ve.vendor_id, ve.vendor_employee_id, v.txt_Company_Name,
    e.txt_first_name, e.txt_last_name, e.txt_email,
    s.txt_state, va.txt_address_1, va.txt_city, va.txt_zip_code,
    bvs.txt_Status_Name, vc.V_class, vd.vendor_class_ID, be.business_id
from t_Vendor_Employee ve
INNER JOIN t_Employee e ON ve.employee_ID = e.employee_ID
INNER JOIN t_Vendor v ON ve.Vendor_ID = v.vendor_ID
inner join t_Business_Vendor be on be.Vendor_ID = ve.Vendor_ID
inner join t_Business_Vendor_Employee_Detail bved
    on bved.Business_Vendor_ID = be.business_Vendor_ID
    and bved.Vendor_Employee_ID = ve.vendor_employee_id
INNER JOIN t_Business_Vendor_Status bvs
    ON bvs.Business_Vendor_Status_ID = bved.Business_Vendor_Status_ID
    and bvs.Business_Vendor_Status_ID = 5
inner join t_vendor_address_detail vad on v.vendor_ID = vad.vendor_id
inner join t_vendor_address va on vad.vendor_address_id = va.vendor_address_id
inner join t_State s on va.state_ID = s.state_id
inner join t_Vendor_Department vd on vd.Vendor_ID = be.Vendor_ID
left outer join (
    select vendor_class_ID, min(txt_Vendor_Class_Name) as v_class
    from t_Vendor_Class
    group by vendor_class_ID
) vc on vd.vendor_class_ID = vc.vendor_class_id
WHERE be.business_id = @business_id
I'm not 100% sure I got this right. I am, however, 100% sure you don't need a stored procedure to generate the result set. Make a view. Query by the keys.
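In sketch form, using the tables from the procedure above (column list abbreviated, names otherwise unchanged):

```sql
-- Define the joins once, as a view:
CREATE VIEW v_vendor_employee AS
SELECT ve.vendor_id, ve.vendor_employee_id,
       v.txt_Company_Name,
       e.txt_first_name, e.txt_last_name, e.txt_email,
       be.business_id
FROM t_Vendor_Employee ve
INNER JOIN t_Employee e ON ve.employee_ID = e.employee_ID
INNER JOIN t_Vendor v ON ve.Vendor_ID = v.vendor_ID
INNER JOIN t_Business_Vendor be ON be.Vendor_ID = ve.Vendor_ID;

-- Callers then treat the view as a table and just filter by keys:
SELECT * FROM v_vendor_employee WHERE business_id = @business_id;
```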
#10
Re: SQL decomposition and testing (SQLite)
Posted 27 January 2014 - 11:02 AM
baavgai, on 27 January 2014 - 12:36 PM, said:
This is for a Crystal Reports pull and had to be written as stated; it uses a background runner to pull the data using XSL and VB.NET. We do not use aliases here unless we absolutely have to.
#11
Re: SQL decomposition and testing (SQLite)
Posted 27 January 2014 - 11:55 AM
DarenR, on 27 January 2014 - 01:02 PM, said:
Ok... not sure how that matters.
DarenR, on 27 January 2014 - 01:02 PM, said:
If you can execute a stored procedure, you should be able to execute a query against a view.
DarenR, on 27 January 2014 - 01:02 PM, said:
Ok. Your loss.
However, if an alias is the difference between:
WHERE t_vendor.vendor_id in (
    select vendor_id from t_Vendor_Department
    where vendor_class_ID = @vendor_class_id
)
and
inner join t_Vendor_Department vd
    on vd.Vendor_ID = t_vendor.Vendor_ID
    and vd.vendor_class_ID = @vendor_class_id
Then you absolutely have to.
#12
Re: SQL decomposition and testing (SQLite)
Posted 27 January 2014 - 12:27 PM
Quote
Hmm... I see similar things written about functional languages. However, from the little functional code I've written, unit testing has helped there. I know I have written SQL that returned the wrong result in the past. It wasn't until I threw some data at it that I noticed my mistake. I thought unit testing was just that: throwing data at some code to see if it meets expectations. I have a hard time believing that not keeping and rerunning the test is a good thing.
I'm torn between just going along with your experience or bashing out test code until I reach enlightenment.
Quote
I'm intrigued as to why. If I'm using SQLite then I don't have much choice but what makes views the preferred option?
#13
Re: SQL decomposition and testing (SQLite)
Posted 27 January 2014 - 12:45 PM
cfoley, on 27 January 2014 - 02:27 PM, said:
Agreed. However, keep in mind that SQL is a very domain specific language; not general purpose. There's only so much it can, will, and should do. This makes the test domain even smaller. Essentially, testing involves feeding a query all possible data variations. Once you've done that, what else can you do?
cfoley, on 27 January 2014 - 02:27 PM, said:
Right. And, once you've seen it work with the data... what is there to test. Also, YOU had to observe the behavior. A test harness would do... what?
For procedure vs. view:
Well, if you can express your request in the form of a SQL query, you're playing to the strength of your database. Databases are most efficient when processing a SQL expressions.
A procedure is always a bit of a hack. You can only ask it one question and the black box that answers might do all kinds of heinous things behind the scenes. It can be the only way to solve a problem, but it only solves one.
A view is transparent. Since it must conform to query rules, there's only so bad it can get. It is also reusable. You can treat a view as a table and ask it any questions you like. A procedure is a one trick pony.
#14
Re: SQL decomposition and testing (SQLite)
Posted 27 January 2014 - 01:04 PM
#15
Re: SQL decomposition and testing (SQLite)
Posted 27 January 2014 - 01:21 PM | http://www.dreamincode.net/forums/topic/339020-sql-decomposition-and-testing-sqlite/ | CC-MAIN-2018-13 | refinedweb | 1,907 | 57.98 |
Talk:Proposed features/Fire Hydrant Extensions
Contents
- 1 dry_barrel vs wet_barrel
- 2 Underground hydrants
- 3 Additional wrench type and cap vs bonnet
- 4 More location values
- 5 fire_hydrant vs suction_point
- 6 Flow and pipes capacity
- 7 Pipe Diameter
- 8 Bonnet
- 9 Remove fire_hydrant: namespace from keys
- 10 survey_date <> survey:date
- 11 request for a split
- 12 "gallon" is ambiguous
dry_barrel vs wet_barrel
How would a surveyor know the difference between dry_barrel and wet_barrel? And also how would you handle the other types of the fire_hydrant:type - underground? In the draft there is no such thing, but firefighters have to find the fire_hydrant if they don't know how what to look for how will they find it? --Krzyk (talk) 18:51, 4 July 2017 (UTC)
- the difference may be determined by looking at the configuration of the bolts. a dry barrel hydrant has a bolt head on top, connecting to a shaft that runs down to the valve at the water main. a wet barrel hydrant has individual bolts behind each of the outlets which are not present on dry barrel hydrants. --Nfgusedautoparts (talk) 20:30, 4 July 2017 (UTC)
- So basically we would need a picture for both types on the future wiki page
- And that's easy to do. I have some i can contribute. --Nfgusedautoparts (talk) 12:34, 5 July 2017 (UTC)
Underground hydrants
- in terms of finding underground hydrants, my understanding is that in the UK there is a standard way of marking their location so the firefighters know where to look. --Nfgusedautoparts (talk) 20:35, 4 July 2017 (UTC)
- But the same you can say about almost any information regarding the hydrant (and other OSM tags), generally in some (most?) countries in Europe each hydrant has to have a label with information about it, so why put all that information into OSM? :) I think we still need a way to tag undeground hydrants. --Krzyk (talk) 09:37, 5 July 2017 (UTC)
- For underground hydrants, fire_hidrant:type=underground is still valid, because the proposal doesn't deprecate it. In Italy there is no information on hydrants' labels. In any case, it's useful to have information about hydrants on a map, because you know the diameter, pressure, etc, before reaching them: you can save time going directly to the most efficient one in the neighborhood.--Viking81 (talk) 17:55, 5 July 2017 (UTC)
Additional wrench type and cap vs bonnet
AlaskaDave (talk) 21:07, 28 July 2017 (UTC) I like the proposal but would like clarification on a couple of points: you do not include "hexagonal" in the "fire_hydrant:wrench" choices, perhaps because there are none using that type of bolt head, even though it is a "standard" bolt type in other applications. Also, the proposal does not attempt to differentiate between a "cap" and a "bonnet" . Thanks for your effort on this. - Dave
- I have no objection to having Hexagonal on the list, but i'm not aware that they're actually used. Cap and Bonnet are standard terminology, i can add descriptions. Nfgusedautoparts (talk) 21:13, 28 July 2017 (UTC)
- i have added a note on colour:cap. The description of Bonnet was already sufficient. Nfgusedautoparts (talk) 22:37, 28 July 2017 (UTC)
More location values
Using the location=* key opens up a vast namespace of different values to us and it's great.
We may need to add indoor (inside a building), wall (outside but stuck into a wall) or tunnel (inside a tunnel), don't we? Fanfouer (talk) 19:55, 4 August 2017 (UTC)
- Do NOT use location=wall and location=underground. fire_hydrant:type=wall is still valid because it denotes a different shape of hydrant, not only its location. The same for fire_hydrant:type=underground.
For indoor it's better to use the existing tag indoor=yes.
For tunnels, yes, location=tunnel can be useful and it's already used for other objects. --Viking81 (talk) 20:38, 15 August 2017 (UTC)
fire_hydrant vs suction_point
To my mind water_source=* is not needed because a hydrant is always connected to the municipal water system - at least that's the definition of a hydrant here in Germany. If the water_source would be pond then it should be emergency=fire_water_pond, for stream it could be emergency=suction_point. I suggest to tag things only as hydrants which are really connected to the water system. --MoritzM (talk) 19:08, 11 August 2017 (UTC)
-
- I got the point but respectably disagree.
- emergency=fire_water_pond is intended for the pond itself, not for the connected hydrant(s).
- Then it's recommended to map at least two different features : the pond as an
and the hydrant as a
. It's usefull to give the information than a given hydrant is fed by a pond and not by the (pressurized) water system.
- It's the same for emergency=suction_point : it's the point where the water is got, not relative to the hydrant itself which can be meters away.
- It may be good to clarify this particular point in the proposal as to not confuse those 3 different features Fanfouer (talk) 20:52, 11 August 2017 (UTC)
- I never saw it this way, but with emergency=fire_water_pond you are right. Should be the water reservoir itself. For emergency=suction_point I disagree. I my eyes it is not the point where the pipe is going into the reservoir but where the firefighters can attach there fire engine to pump water. So it would be the end of the pipe.
- For me as a firefighter there is a second point why to distinguish between hydrant and other suction points: from a hydrant I get pressurized water so I do not need a fire engine or pump. Whereas at a suction point I've to pump the water from a deeper level. Like the pipe picture in the proposal or the pond picture you added here. Maybe emergency=suction_point is not the best tag for this, but it fits better to something where to suck water from then a hydrant. I agree in this point with where a hydrant is a thing where to get water from a water main (not a pond or lake or stream).
- The point isn't to make a distinguish at all, but to put this difference in water_source=* and not in the emergency=* key.
- Since the object name can't handle all the criterias as to not be too extended, we choose to put this information in another key.
- A suction point is a place, and fire hydrant is a kind of device which may or not be present at suction points to end the pipe. Then it will be useful to have both on the map, depending of what anyone is looking for.
- If we differentiate network fed hydrants and other sources in the emergency=* key, there will be a lot of errors I think since they can share the same appearance.
- Finally, there are about 4k of suction_points and 150k of emergency=fire_hydrant + a source indication Fanfouer (talk) 15:31, 12 August 2017 (UTC)
- Just a question which comes into my mind when thinking about your argument that emergency=suction_point is the point where the water is got: how figure out where exactly the end of the pipe goes into the water? It can be really hard to determine.
- The other question is: which value does this information have for the emergency services? The firefighters are only interested in the end of the pipe where to connect their pump.
- Because there is a third thing to distinguish: not a real suction_point an no hydrant but a well to get ground water from, I started a proposal for a fire water well: --MoritzM (talk) 09:47, 12 August 2017 (UTC)
- Not only firefighters are interested but public planners, biologists, fishermen and so on to know where an amount of water can occasionally be took.
- That's why I'm not so keen on emergency=* tag because such infrastructure can also be a concern for a large amount of people.
- And that's why we should map fire hydrants with sources and places for firefighters and water intakes for other people. Fanfouer (talk) 15:31, 12 August 2017 (UTC)
- I agree. What about giving the real position of the suction point the tag emergency=suction_point and the other end in the water another tag like suction_point=water_intake? --MoritzM (talk) 07:32, 14 August 2017 (UTC)
- emergency=fire_water_pond is an existing tag intended for the pond itself
- emergency=suction_point is an existing tag for the point where fire engine can be connected to suck water. We can eventually tag the other end in the water with suction_point=water_intake
- With this in mind, we don't need water_source=*, because an hydrant, according to me, to firefighters and to is always an outlet from a pressurized water main (public, private, small or large, it doesn't matter, the point is that it's pressurized). Using emergency=fire_hydrant + water_source=stream/pond is a potential cause of problems because any data user that does not parse the water_source=* key, can't distinguish between pressurized hydrants and not pressurized suction points. So any hydrants that you think it needs water_source=stream/pond in reality is not an hydrant, but a suction point and it should be tagged with emergency=suction_point.
- For water wells, if they do not contain a pump, they are suction points and should be tagged emergency=suction_point. If they contain a pump that supplies pressurized water, they are fire hydrants and should be tagged emergency=fire_hydrant. Other tags like fire_hydrant:pressure=*, fire_hydrant:flow_capacity=*, fire_hydrant:couplings_type=* will describe them better. If we want we can refine emergency=suction_point to describe the water level below the ground, the water source (stream/pond/well) and so on. --Viking81 (talk) 23:14, 15 August 2017 (UTC)
- Nope, even if wells provide pressurized water, they are still wells. Fire hydrants are by definition connected to the water main, not to the ground water. Thus water wells with pump should also be emergency=suction_point with additional tags i.e. suction_point:type=fire_water_well fire_water_well:type=electric_pump--MoritzM (talk) 14:18, 16 August 2017 (UTC)
- Not exactly. Hydrants that serve factories or shopping centers often are not connected to the public water main: instead they are connected to a private local water network fed by pumps that suck water from the ground. So these local water networks and their hydrants are fed by what we can call water wells.
The fundamental distinction for firefighters is: hydrants provide pressurized water; suction points provide water but you need your pump and equipment to suck it out. --Viking81 (talk) 22:09, 16 August 2017 (UTC)
- Suction points aren't devices like hydrants but are places. In France we totally have hydrants without any pressure but they look like pressurized hydrants except they are painted in blue. Then we can't arrange hydrants and suction points in the same category since one is an equipment/device and the second is a place and they can fit together.
- fire_hydrant:pressure=* may be renamed in a simpler pressure=* and take the value 0 when hydrant isn't pressurized.
- You get the same taxonomy with benches vs parks : a park may contains benches but benches can be seen in a ton of different places. (park is the suction point and bench is hydrants inside) Fanfouer (talk) 23:11, 16 August 2017 (UTC)
- Suction points aren't devices like hydrants but are places: is this the common accepted definition? Then it must be written in the suction point page.
- In this case, emergency=fire_hydrant will include all devices to whom firefighters can connect, pressurized or not. And the only way to distinguish between them is fire_hydrant:pressure=*. Is this right? --Viking81 (talk) 08:57, 17 August 2017 (UTC)
After a long discussion in mailing list, we go back to this solution:
emergency=fire_hydrant will include all devices to whom firefighters can connect, pressurized or not. fire_hydrant:pressure=* will be used to distinguish pressurized or not.
emergency=suction_point is a place near where you can park the fire engine and you put down your pump and hoses to suck water from a not pressurized water reserve.
water_source=* is again useful both for emergency=fire_hydrant and for emergency=suction_point.
--Viking81 (talk) 10:22, 3 September 2017 (UTC)
Flow and pipes capacity
After a discussion on French mailing list regarding fire_hydrant:flow_capacity=*, we'd may be better using capacity=* namespace.
We could consider 2 keys :
- capacity:flow=* to know how much water the hydrant can provide during a given time range (l/m, m3/s, m3/h and so on)
- capacity:pipes=* to know how much outlets the hydrant have to connect pipes.
This tagging schema can also serve the adjacent Dry riser inlet proposal.
Not to mention capacity:flow=* will be usefull for drains, culverts, pipelines (instead of simple capacity=*).
How do you feel about that ? Fanfouer (talk) 11:44, 23 August 2017 (UTC)
- 2 Keys are Ok for me rather than fire_hydrant:flow_capacity=* which is really too specific and can't be used outside the context of fire_hydrants Gendy54 (talk) 00:00, 24 August 2017 (UTC)
- Currently in the tagging mailing list we are going towards flow_rate=*, that is probably the most correct English term.
- For the number of connections, in the proposal there is already fire_hydrant:couplings_size=* that has a certain consensus and that will list all diameters separated by semicolons (i.e. fire_hydrant:couplings_size=45;70;70). This because very often there are couplings with different diameters and only the number of them isn't enough. Anyway we can change fire_hydrant:couplings_size=* => couplings_diameters=* --Viking81 (talk) 19:25, 28 August 2017 (UTC)
- flow_rate=* and capacity:flow=* may give two different information : firefighters look for minimum flow rate and capacity is the maximum reachable flow, don't you ?
- Regarding couplings, a semicolon separated list is hard to process. I have two ideas to improve the description a bit :
- - Add manufacturer=* and model=* to tagging as to retreive information from manufacturers' documentation. In France we have only 3 or 4 different companies building hydrants.
- - Consider introduction of couplings:big=*, couplings:big:diameter=*, couplings:medium=*, couplings:medium:diameter=* and couplings:small=*, couplings:small:diameter=* because you'll only have 2 or 3 different coupling size/types on a given hydrant.
- Let me know how do you feel about that, this is what is intended on Power transformers and windings=* key Fanfouer (talk) 23:00, 30 August 2017 (UTC)
- - flow_rate=* and capacity:flow=*: -1 It is misleading to have two similar tags. Only one for me as mapper and as firefighter is enough. Normally water companies declare only one value, and it should be the nominal flow rate. Nominal flow rate is enough for firefightening purposes. Let's try to keep this proposal as simpler as possible.
- - manufacturer=* and model=*: I think that this is micromapping.
- - couplings:big:diameter=*, etc: -1 It is too complicated. We have just removed fire_hydrant: namespace for the sake of simplicity and this goes in the opposite direction. The use of semicolon separated values is commonly accepted in other tags. It isn't hard to handle if it is well documented on the wiki. --Viking81 (talk) 20:36, 31 August 2017 (UTC)
- Don't get me wrong, knowing model + manufacturer allows us to get other information like couplings in public documentation instead of openning each hydrant. Semicolon values go messy along time because anyone will arrange the list on its own understanding. Values like couplings_diameters=50;50;200 will match OAPI statement ["couplings_diameters"~"20"] (should I write "(^|;)20(;|$)"?). Then I know the solution with couplings:big:*=* wasn't the best one, but here namespace is used to differentiate part of the feature (and serve as a guide to mappers) and won't be used to group keys in a restricting category which is actually different. Removing fire_hydrant: namespace was a really good choice but it doesn't prevent us to use this tool when appropriate.
- Won't you appreciate a solution to give standalone values regarding couplings instead of lists ? Fanfouer (talk) 21:01, 31 August 2017 (UTC)
- I continue to prefer a list. Your solution with couplings:big:*=* would introduce additional new tags and namespaces. To bypass the problem of "looking for 20 and finding 200", we can make compulsory the units of measure. Moreover, millimeters or inches aren't standard units in I.S. (because standard is meter), so it makes sense to specify them. It would become couplings_diameters=50 mm;50 mm;200 mm--Viking81 (talk) 22:02, 31 August 2017 (UTC)
- Understood for couplings_diameter=*. Would you mind adding couplings=* to give the total amount of couplings please ? Because you can't filter hydrants with the list length. It will ease QA tools to detect possible mistakes Fanfouer (talk) 21:15, 4 September 2017 (UTC)
- Ok updated with couplings=*, manufacturer=* and model=*. --Viking81 (talk) 22:51, 5 September 2017 (UTC)
Pipe Diameter
In Australia, New South Wales, the pipe diameter is stated in mm rather than flow rate. Most urban hydrants are underground and have above ground signs with the information on them, see Warin61 (talk) 00:27, 1 September 2017 (UTC)
- Ok, currently pipe diameter will go in fire_hydrant:diameter=# --Viking81 (talk) 20:58, 1 September 2017 (UTC)
Bonnet
We talk about the color of the bonnet. Can not we also put bonnet=yes/no or fire_hydrant:bonnet=yes/no because some fire_hydrants have bonnet and others not. He seems to be an interesting infomation. Gendy54 (talk) 23:36, 23 August 2017 (UTC)
- I think that this is micromapping. As a firefighter, I don't have the necessity to have this information. Personally, I wouldn't add another tag. --Viking81 (talk) 19:19, 31 August 2017 (UTC)
- Moreover, every pillar type hydrant has a sort of bonnet, intended as upper part of the hydrant. But only some hydrants have it of a different colour, and that's the reason of bonnet:colour=*. --Viking81 (talk) 22:51, 5 September 2017 (UTC)
Remove fire_hydrant: namespace from keys
As discussed on @tagging ML, several users think it's better to remove fire_hydrant: and suction_point: namespaces from keys name.
Here is a list of what is wished :
- fire_hydrant:type=* => fire_hydrant=*
- fire_hydrant:wrench=* => wrench=*
- fire_hydrant:pressure=* => pressure=*
- fire_hydrant:flow_capacity=* => capacity:flow=* (see more in dedicated chapter)
- fire_hydrant:couplings_type=* => couplings_type=*
- fire_hydrant:couplings_size=* => couplings_size=*
Many of those keys aren't special to fire hydrants and may be useful to other objects.Fanfouer (talk) 15:34, 25 August 2017 (UTC)
- OK. But currently in ML we are going towards fire_hydrant:flow_capacity=* => flow_rate=*. And for fire_hydrant:couplings_size=* maybe it's more clear couplings_diameters=*. I'll update the proposal.--Viking81 (talk) 19:30, 28 August 2017 (UTC)
- Really nice of you to take care of this. It looks really great, thank you Fanfouer (talk) 09:23, 30 August 2017 (UTC)
- Some of these tags are really established and used thousand and even tens of thousands of times. Who is going to change this established scheme, all of the existing nodes, and convinces the >1000 mappers who used these tags so far to change to your scheme? --Mueschel (talk) 11:27, 11 September 2017 (UTC)
- This is long term change. We aim to provide a more versatile and universal tagging scheme for sake of simplicity. Wiki, tools and renders will progressively be encouraged to use the new tags. It's the only way to make things better. Using universal tagging prior to specific keys should always be a goal. Fanfouer (talk) 12:34, 11 September 2017 (UTC)
- I only see advantages of using a dedicated namespace. Likewise, what is the purpose of replacing fire_hydrant:type by fire_hydrant and fire_hydrant:diameter by diameter on 600.000 and 400.000 objects *by hand*? If it has a real advantage, I'm fine with such changes, but not in a case like this where it just looks nicer at best. --Mueschel (talk) 12:54, 11 September 2017 (UTC)
- Using a dedicated namespace prevent other part of comunity to use in other fields of knowledge the work you've done. It's more efficient to collaborate on the diameter=* formalism than on dedicated keys. It's a huge advantage since we're all part of the same community.
- Furthermore, :type subkeys are semantically meaningless and encourage a mess in possible types list like we saw with pond case in the middle of physical appearance related values. Such subkeys encourage the definition of other subkeys aside while it's not needed. We only propose to replace values by hand because automatic edits aren't possible and shouldn't be done since it's a unique occasion to check if data is right. Be sure QA tools, editors, validators, checks, crowdsourcing apps will help to do this change.
- It has already be successfully done with power=sub_station to more correct power=substation. Even if the old tag hasn't disappeared from DB yet, the new one had been adopted more widely and faster than the old one. Having a simpler and more meaningful tagging scheme encourage people to map, that's all Fanfouer (talk) 13:10, 11 September 2017 (UTC)
- What is the plan with all the other tags existing in the fire_hydrant namespace? If some are moved to the new keys and others stay in the existing namespace it will be a mess. There should be a list with all the replacements. --Mueschel (talk) 18:42, 11 September 2017 (UTC)
- There is a list here :
- Only reviewed/approved keys are planed to be replaced. Others aren't covered by this proposal and can find an transparent equivalent (fire_hydrant:operator=* => operator=*)
- Do you see any miss in the list ? Fanfouer (talk) 19:22, 11 September 2017 (UTC)
- Several... fh:couplings (which is _not_ replaced by couplings). fh:street, fh:city, fh:housenumber (which is not a case of addr:*). There are 73 subkeys in use according to Taginfo. --Mueschel (talk)
- I can't find any description of those keys. That's why this proposal deals with reviewed keys only. Look, fire_hydrant:street=* redirects to a page... without any mention of fire_hydrant:street :(
- Can you give us more information please ? Fanfouer (talk) 20:24, 11 September 2017 (UTC)
- I don't know details, but these tags are in use thousands of times and should not be ignored by any new tagging scheme. A mixture of old and new is the worst thing that could happen because it's extremely confusing. --Mueschel (talk) 23:26, 11 September 2017 (UTC)
Since we removed fire_hydrant: prefix, now we can use : for other purposes.
So in accordance with other tags (like cap:colour=*, bonnet:colour=*, survey:date=*):
couplings_type=* becomes couplings:type=*
couplings_diameters=* becomes couplings:diameters=*
--Viking81 (talk) 12:44, 18 September 2017 (UTC)
survey_date <> survey:date
instead of survey_date, it would be better to use the more common tag survey:date=*
request for a split
we are adding more and more tags... maybe it's time to split the proposal in 2, one with :
change fire_hydrant:position=* to location=*
change fire_hydrant:type=* to fire_hydrant=*
change in_service=yes/no to disused:emergency=fire_hydrant
pillar:type=*
fire_hydrant:class=*
flow_rate=*
water_source=*
pressure=*
survey:date=*
colour=*
bonnet:colour=*
cap:colour=*
reflective:colour=*
couplings=*
couplings_type=*
wrench=*
other tags that are recent or for which it is doubt should be put in a separate proposal in order to allow to discuss it without blocking the first modifications. it is already a HUGE proposal that affect nearly all hydrant. IMHO we need to validate a first release, allowing app/tools to be updated (with a transition phase requiring management of old and new tags)
- As soon as possible we will split the proposal, so we can vote the approved tags. --Viking81 (talk) 21:57, 13 September 2017 (UTC)
- Proposal splitted: new tags in Fire Hydrant Extensions (part 3). --Viking81 (talk) 17:08, 16 September 2017 (UTC)
"gallon" is ambiguous
I mentioned this on the voting page: "Gallon" is an ambiguous unit in English! A US and UK gallon are very different. 1 gallon is 1.2 US gallons. There's about 20% in it!? (In keeping with OSM usage, I am using British English, hence "gallon" = "uk gallon" = 4.546090 litres. A US gallon is 3.785412 litres.)
If the specificiation was left like this, then someone should, using OSM convention, interpret "gpm" as "gallons per minute", and go around and change all the hydrants in the USA to use (UK) gallons. I suspect that's not what you want.
I suggest using
usgalpm or
usgal/min or similar to make this clear and unambiguous.
Rorym (talk) 11:18, 13 October 2017 (UTC)
- Sure, we can develop your idea. Simply we didn't think to it. But if you and many other vote against this proposal, it will be stopped together with all other improvements. Please be cooperative, not obstructionist. --Viking81 (talk) 20:26, 13 October 2017 (UTC)
- "Simply we didn't think to it." Perfectly understandable.
- Your proposal is not up to standards and shouldn't be approved in it's current form. So please don't call those cooperating with constructive criticism obstructionist. --De vries (talk) 14:55, 16 October 2017 (UTC)
- Why not use liter instead of gallons. Liter is an ISO derivative unit of measure. The ISO best unit of measure is cubic meter per second but I think it's too big. Liter per minutes is acceptable. A table for conversion in gallons (US and/or UK) could be inserted into wiki page.--Miox (talk) 06:29, 17 October 2017 (UTC)
- Because in USA someone could use gallons anyway. So it is better to define them from now. usgal/min is used in Fire Hydrant Extensions (part 2) --Viking81 (talk) 20:17, 17 October 2017 (UTC) | https://wiki.openstreetmap.org/wiki/Talk:Proposed_features/Fire_Hydrant_Extensions | CC-MAIN-2018-43 | refinedweb | 4,299 | 60.65 |
Jest is a JavaScript testing framework requiring little to no configuration. Here’s a quick post to get you up and running with it.
Setting Up Jest
Are you using create-react-app? If so, Jest is already installed, so you can skip this part. Thanks, vertical integration!
It’s pretty straightforward. Install it via Yarn (or NPM):
yarn add --dev jest
And add a script for it in package.json:
"scripts": { "test": "jest" }
If you wanna use Babel in your tests, just install
babel-jest:
yarn add --dev babel-jest
🐊 Recommended courses ⤵️⚛️ Learn React and React Hooks using a project-based approach
Creating Test Files
Jest will treat any file that ends in
.test.js or
.spec.js as a test file. So, if you have a file called
divide.js, you can put a
divide.test.js next to it in the same directory and Jest will pick it up when it runs.
Jest will also consider files that are in a
__TESTS__ folder to be test files, if you want to keep your tests separate.
Writing Tests
Jest injects a number of globals into test files, so there’s nothing to import other than the stuff you want to test. So, if you have a
divide.js like:
export default function divide(dividend, divisor) { if (divisor === 0) { throw new Error('Division by zero.'); } return dividend / divisor; }
Your
divide.test.js would look something like:
import divide from './divide'; // Describe the test and wrap it in a function. it('divides down to the nearest integer.', () => { const result = divide(5, 2); // Jest uses matchers, like pretty much any other JavaScript testing framework. // They're designed to be easy to get at a glance; // here, you're expecting `result` to be 2.5. expect(result).toBe(2.5); });
From there, just run your script:
yarn run test
And Jest will run your tests.
Coverage
One of the best things about Jest is how easy it is to get code coverage. All you have to do is pass in the
coverage flag and Jest will handle the rest. So, for the example in the last section, just change the command to:
yarn run test -- --coverage
And it'll give you the coverage:
👑 This is just an introduction; there is so much more to Jest. | https://alligator.io/testing/jest-intro/ | CC-MAIN-2019-35 | refinedweb | 383 | 74.59 |
I try to run a script in cucumber on my macbook pro. I stumble upon this:
dyld: lazy symbol binding failed: Symbol not found: _rb_float_new
Referenced from: /Users/josnederlof/.calabash/gems/json-1.8.3/lib/json/ext/parser.bundle
Expected in: flat namespace
I already uninstalled json en after that a install (gem uninstall son and gem install json)
Still got the error.
Please help.
Hi Jos,
Are you using bundler? Are you using calabash-sandbox? What version of Mac/Ruby are you using?
Problem isn't relevant anymore, I am using C# for testing now.
Sorry for my late answer. | https://forums.xamarin.com/discussion/56611/dyld-lazy-symbol-binding-failed-symbol-not-found-rb-float-new | CC-MAIN-2019-22 | refinedweb | 102 | 60.41 |
Going Places
IronRuby on Windows Phone 7
Shay Friedman

For example, this IronRuby code loads the System.Windows.Forms assembly and takes advantage of its classes:
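The code listing that accompanied this paragraph did not survive extraction; a minimal sketch in the same spirit follows. It assumes IronRuby's usual conventions for loading CLR assemblies and snake_casing .NET member names, and the form and button details are invented for the example. This code only runs under IronRuby on a machine with Windows Forms available:

```ruby
# Load the System.Windows.Forms assembly by name (IronRuby resolves it from the GAC)
require 'System.Windows.Forms'
include System::Windows::Forms

# Use the CLR classes directly; .NET's Form.Text becomes form.text in IronRuby
form = Form.new
form.text = 'Hello from IronRuby'

button = Button.new
button.text = 'Click me!'
# .NET events can be subscribed to with a Ruby block
button.click { |sender, args| MessageBox.show('Hello!') }

form.controls.add(button)
Application.run(form)
```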
This integration is possible thanks to the Dynamic Language Runtime (DLR), a layer added to the .NET Framework infrastructure to provide common services to dynamic languages written on top of the framework. The DLR is written on top of the CLR and makes it much easier to implement dynamic languages on top of .NET. This is one of the main reasons for the rise of .NET Framework dynamic languages we’ve seen lately, including IronRuby, IronPython, IronJS, Nua, ClojureCLR and others.
Key Features of IronRuby
Ruby is a dynamic language and so is IronRuby. This means there’s no compiler at hand, and most of the operations done during compilation and build time in static languages are done during run time. This behavior provides a variety of features that are difficult or impossible to achieve in most current static languages.
Interoperability with .NET Framework Objects The Ruby language has various implementations: MRI (which is the original one), JRuby, Rubinius, MacRub, IronRuby and others. What makes IronRuby stand out from the crowd is its ability to conveniently interact with .NET Framework objects. That interoperability goes both ways—.NET Framework objects are available from IronRuby code and IronRuby objects are available from .NET Framework code.
Dynamic Typing IronRuby variable types are calculated during run time, so there’s no need to specify the types in your code. However, that doesn’t mean that IronRuby doesn’t have types. It does, and every type has its own rules, just like types in static languages. This code sample demonstrates the dynamic typing mechanism in a few simple steps:
# Declaring a numeric variable a = 1 # The variable is of a numeric type # and therefore numeric operations are available a = a * 2 + 8 / 4 # The next line will raise an exception # because it is not possible to add a string to a number a = a + "hello" # However, the next line is entirely legit and will result # in changing the variable type to String a = "Hello"
The Interactive Console Similar to the Windows command prompt, the interactive console is an application that retrieves IronRuby code and immediately executes it. The execution flow is also known as Read-Evaluate-Print-Loop (REPL). You can define variables, methods and even classes, load IronRuby files or .NET Framework assemblies and use them instantly. For example, Figure 1 shows a simple console session that creates a class and immediately uses it.
Figure 1 Using the IronRuby Console
Duck Typing IronRuby is an object-oriented language. It supports classes, inheritance, encapsulation and access control, like you’d expect from an object-oriented language. However, it doesn’t support interfaces or abstract classes, like many static languages do.
This isn’t a flaw in the language design, though. With dynamic typing, declaring code contracts such as interfaces or abstract classes becomes redundant. The only thing that matters about an object is whether it defines a specific method or not, and there’s no need to mark it when it does. This is known as duck typing—if it quacks like a duck and it swims like a duck, it’s a duck, and there’s no need to stamp it to consider it as a duck.
For example, the code sample in Figure 2 contains two classes with a method named say_hi and another general method named introduce that retrieves an object and executes its say_hi method.(Notice the absence of interfaces or other marking mechanisms.)
Figure 2 An Example of Duck Typing
Metaprogramming IronRuby comes with powerful metaprogramming capabilities. Metaprogramming is a way to add, change and even remove methods during run time. For example, it’s possible to add methods to a class, write methods that define other methods or remove method definitions from an existing class. Figure 3 adds a method to a class that’s reflected to all current and future instances of that class.
Figure 3 Adding a Method to a Class After It Has Been Declared
# Creating a class with no methods class Demo end # Creating an instance of class Demo d = Demo.new # Opening the class and adding a new method - hello_world class Demo def hello_world puts "hello world" end end # Using the newly added method on the class instance d.hello_world # prints "hello world"
Moreover, there are special methods that can be used to catch calls to undefined methods or constants. Using these methods makes it easy to support dynamic method names such as find_by_[column name] where [column name] can be replaced with any value such as find_by_name, find_by_city or find_by_zipcode.
RubyGems The Ruby language, as powerful as it is, wouldn’t have become such a huge success without the external libraries that can be installed and used with it.
The main method of installing Ruby libraries is via the RubyGems system. It’s a package manager that helps distribute and install Ruby libraries, which are called gems. There are thousands of free gems available, covering almost every programming aspect and task, including testing frameworks, tax calculation libraries, Web development frameworks and more.
You should be aware that some RubyGems depend on C libraries. These gems can’t run on the current version of IronRuby unless the C libraries are ported to plain Ruby or to C#.
The Community One of the best things about IronRuby is that you get access to the Ruby community. This includes valuable content in dozens of forums, mailing lists, chat rooms and blogs provided by people who are willing to help with any question. Don’t hesitate to take advantage of these resources; they’re extremely useful.
IronRuby and Silverlight
Silverlight 2 introduced a new and important feature: support for DLR languages. As a result, developers can use IronRuby with Silverlight applications, from incorporating it in the application to writing entire Silverlight applications with it.
But wait, Silverlight is running on Windows Phone 7, right? Exactly.
Windows Phone 7
The next Microsoft mobile platform, Windows Phone 7, is expected by some to become a game-changer in the smartphone industry. Apart from the standard multi-touch capabilities and a shiny new UI, the best news about Windows Phone 7 from a developer’s perspective is that Silverlight is its development platform.
It’s a smart move by Microsoft to make use of a well-established technology, thus enabling a large number of developers to create mobile applications with an easy, almost unnoticeable, learning curve.
Because DLR languages are capable of running within the Silverlight environment, you can take advantage of IronRuby and use it to write Windows Phone 7 applications..
The main missing feature that affects IronRuby is the Reflection.Emit namespace. IronRuby uses this feature to compile code on the fly to make applications run faster. However, it’s only a performance optimization and not a component necessary for running simple scripts and applications.
Another limitation concerns the way new Windows Phone 7 applications are created. Such applications can be created only from Visual Studio and only in C#. This requirement forces developers to write code in C# that initiates the IronRuby code.
The last important limitation is that RubyGems won’t work on Windows Phone 7. Hence, to use a gem, you have to include its code files within the application files and use them as any other IronRuby code files.
Building a Simple IronRuby Application on Windows Phone 7
To start an IronRuby-driven Windows Phone 7 application, you first need to install the Windows Phone 7 Developer Tools, which can be downloaded from developer.windowsphone.com.
After the tools are installed, open Visual Studio and go to File | New | Project. In the New Project dialog select the “Silverlight for Windows Phone” category and then choose the “Windows Phone Application” project template. Name it and continue.
As soon as the new project opens, you’ll notice that a simple XAML file has been created for you. Note that XAML is required for Silverlight in general and isn’t language-dependent. Therefore, even though the application code will be written in IronRuby, you must use XAML to create the UI. In this simple application, the default XAML file is enough, so no changes need to be made here.
The interesting part of this simple application is the code. Before we dive into that, however, we need to add references to the IronRuby and DLR assemblies. These assemblies aren’t the regular ones; we need the Windows Phone 7-ready assemblies, which you can retrieve from ironruby.codeplex.com/releases/view/43540#DownloadId=133276. You’ll find the needed assemblies inside the silverlight/bin folder in the downloaded package.
Next, we need to write the IronRuby code. Add a new text file to the application and name it MainPage.rb. In addition, to ease the deployment to the phone, open the properties of this file and change the “Build Action” property to “Embedded Resource.”
Then paste the code from Figure 4 into the file.
Figure 4 IronRuby Code File to Run on Windows Phone 7
# Include namespaces for ease of use include System::Windows::Media include System::Windows::Controls # Set the titles Phone.find_name("ApplicationTitle").text = "MSDN Magazine" Phone.find_name("PageTitle").text = "IronRuby& WP7" # Create a new text block textBlock = TextBlock.new textBlock.text = "IronRuby is running on Windows Phone 7!" textBlock.foreground = SolidColorBrush.new(Colors.Green) textBlock.font_size = 48 textBlock.text_wrapping = System::Windows::TextWrapping.Wrap # Add the text block to the page Phone.find_name("ContentGrid").children.add(textBlock)
The IronRuby code in Figure 4 is pretty straightforward; we set the titles, create a text block with some text and add it to the page. Note that you can use everything in the Ruby language (not done here), such as classes, metaprogramming and libraries, with the aforementioned limitations of running within the Windows Phone environment.
Now all that’s left is to actually execute the IronRuby code. To do so when the application loads, the code from Figure 5 should be added to the MainPage class constructor, which is located inside the MainPage.xaml.cs file.
Figure 5 Adding Code to Execute IronRuby Code from the Class Constructor
// Allow both portrait and landscape orientations SupportedOrientations = SupportedPageOrientation.PortraitOrLandscape; // Create an IronRuby engine and prevent compilation ScriptEngine engine = Ruby.CreateEngine(); // Load the System.Windows.Media assembly to the IronRuby context engine.Runtime.LoadAssembly(typeof(Color).Assembly); // Add a global constant named Phone, which will allow access to this class engine.Runtime.Globals.SetVariable("Phone", this); // Read the IronRuby code Assembly execAssembly = Assembly.GetExecutingAssembly(); Stream codeFile = execAssembly.GetManifestResourceStream("SampleWPApp.MainPage.rb"); string code = new StreamReader(codeFile).ReadToEnd(); // Execute the IronRuby code engine.Execute(code);
The code in Figure 5 is fairly short and gracefully demonstrates how easy it is to run IronRuby code from C# code.
In addition, make sure to add these using statements to the class:
The third line of code in Figure 5 loads the System.Windows.Media assembly into the IronRuby context, which enables the code to interoperate with this assembly’s classes and enums.
The next line allows the IronRuby code to access the current Silverlight page. This line exposes the current instance (this) to the IronRuby code via a constant named Phone.
The rest of the code reads the IronRuby code from the embedded file (note that the application namespace should be added to the file name, so MainPage.rb becomes SampleWPApp.MainPage.rb) and then executes it using the engine instance.
And that’s it. We’ve created an application that, once loaded, runs IronRuby, which, in turn, changes the titles and adds a text block to the Silverlight page. All that’s left is to run the application, and the result is shown in Figure 6.
Figure 6 An IronRuby-Driven Application Running on Windows Phone 7
Getting Better All the Time
Even though the workflow isn’t perfect when using IronRuby on Windows Phone 7, and you need to keep the various limitations in mind, this is only the beginning. The IronRuby and Windows Phone 7 platforms are both new and they’re getting better all the time.
This combination opens up many possibilities, to both .NET Framework developers and Ruby developers. Now, .NET developers can take advantage of the incredible power of the Ruby language when writing Windows Phone 7 applications, such as incorporating an IronRuby console into their apps or providing extensibility capabilities. And Ruby developers, on the other end, can—for the first time—write mobile applications using their language.
This is, without a doubt, the dawn of a brave new world with a lot of opportunities and possibilities. And it’s all in the palm of your hands.
Shay Friedman is a Microsoft Visual C#/IronRuby MVP and the author of IronRuby Unleashed (Sams, 2010). He’s working as a dynamic languages leader in Sela Group where he consults and conducts courses around the world. Read his blog at IronShay.com.
Thanks to the following technical expert for reviewing this article: Tomas Matousek
MSDN Magazine Blog
More MSDN Magazine Blog entries >
Receive the MSDN Flash e-mail newsletter every other week, with news and information personalized to your interests and areas of focus. | https://msdn.microsoft.com/en-us/magazine/ff960707.aspx | CC-MAIN-2015-32 | refinedweb | 2,212 | 54.22 |
This tutorial shows how to get the file attributes for a given file in Java with examples. It uses the
Path and
Files classes from the Java NIO API to fetch the attribute information of a file from the underlying file system.
The file attributes which can be read for a file are whether it is readable, writable and/or executable. Using Java NIO API the file attributes can be accessed in the following 2 steps –
- An instance of
java.nio.files.Pathneeds to be created using the actual path to the file in the file system.
- Then using
java.nio.file.Filesclass the attributes can be read as
booleanvalues using the following methods with the
Pathinstance created in step 1 passed as a parameter to the method –
FileSystem.isReadable()– checks whether file is readable
FileSystem.isWritable()– checks whether file is writable
FileSystem.isExecutable()– checks whether file is executable
Let us now see a Java code example showing how to retrieve the attributes of a given file, which is followed by an explanation of the code.
OUTPUT of the above code
Java NIO-based code example to get permissions for a file
package com.javabrahman.corejava; import java.nio.file.Files; import java.nio.file.Path; import java.nio.file.Paths; public class CheckFileAttributes { public static void main(String args[]) { Path filePath = Paths.get("C:\\JavaBrahman\\LEVEL1\\file1.txt"); //Is file readable boolean isReadable = Files.isReadable(filePath); System.out.println("Is file readable: " + isReadable); //Is file writable boolean isWritable = Files.isWritable(filePath); System.out.println("Is file writable: " + isWritable); //Is file executable boolean isExecutable = Files.isExecutable(filePath); System.out.println("Is file executable: " + isExecutable); } }
Explanation of the code
Is file readable: true
Is file writable: true
Is file executable: true
Is file writable: true
Is file executable: true
- The class
CheckFileAttributesfetches the file attributes or permissions for a file named
file1.txt.
- It first creates a
Pathinstance, named
filePath, using the full path of the file(
"C:\\JavaBrahman\\LEVEL1\\file1.txt") in the file system.(The double backward slashes(‘\\’) are to escape the single backward slash(‘\’) in the
Stringpath value on the Windows file system.)
filePathis then passed as a parameter to three attribute checking methods –
Files.isReadable(),
Files.isWritable()and
Files.isExecutable().
- The printed output shows that
file1.txtis readable, writable, and executable, as the value returned by all three methods is
true.
| https://www.javabrahman.com/quick-tips/how-to-get-file-attributes-or-permissions-in-java-with-examples/ | CC-MAIN-2020-16 | refinedweb | 394 | 57.67 |
The XPathResult2 interface represents the result of the evaluation of an XPath 2.0 expression within the context of a particular node. More...
#include <XPath2Result.hpp>
The XPathResult2 interface represents the result of the evaluation of an XPath 2.0 expression within the context of a particular node.
Since evaluation of an XPath 2.0 expression can result in various result types, this object makes it possible to discover and manipulate the type and value of the result.
FIRST_RESULT
The result is a sequence as defined by XPath 2.0 and will be accessed as a single current value or there will be no current value if the sequence is empty. Document modification does not invalidate the value, but may mean that the result no longer corresponds to the current document. This is a convenience that permits optimization since the implementation can stop once the first item in the resulting sequence has been found. If there is more than one item in the actual result, the single item returned might not be the first in document order.
ITERATOR_RESULT
The result is a sequence as defined by XPath 2.0 that will be accessed iteratively. Document modification invalidates the iteration.
SNAPSHOT_RESULT
The result is a sequence as defined by XPath 2.0 that will be accessed as a snapshot list of values. Document modification does not invalidate the snapshot but may mean that reevaluation would not yield the same snapshot and any items in the snapshot may have been altered, moved, or removed from the document.
Destructor.
Conversion of the current result to boolean.
Conversion of the current result to int.
Signifies that the iterator has become invalid.
Retrieve the current node value.
Conversion of the current result to double.
If the native double type of the DOM binding does not directly support the exact IEEE 754 result of the XPath expression, then it is up to the definition of the binding to specify how the XPath number is converted to the native binding number.
Returns the result type of this result.
The number of items in the result snapshot.
Valid values for snapshotItem indices are 0 to snapshotLength-1 inclusive.
Conversion of the current result to string.
Returns the DOM type info of the current result node or value.
Returns true if the result has a current result and the value is a node.
Iterates and returns true if the current result is the next item from the sequence or false if there are no more items.
Called to indicate that this object (and its associated children) is no longer in use and that the implementation may relinquish any resources associated with it and its associated children.
Access to a released object will lead to unexpected result.
Sets the current result to the indexth item in the snapshot collection.
If index is greater than or equal to the number of items in the list, this method returns false. Unlike the iterator result, the snapshot does not become invalid, but may not correspond to the current document if it is mutated. | http://xqilla.sourceforge.net/docs/dom3-api/classXPath2Result.html | CC-MAIN-2015-18 | refinedweb | 510 | 56.96 |
This article is about using GIMP-Python, which is a set of Python modules that allow you to do programming in Python to automate commands in GNU Image Manipulation Program (GIMP). These Python modules are wrappers around the libgimp libraries. GIMP-Python is different from the Script-Fu extensions. In Script-Fu, a plug-in is used to execute scripts. In GIMP-Python, the Python script takes center stage and does the work. You can instantiate the GIMP-Python scripts from inside GIMP itself, or you can use GIMP's batch mode to start it from the command line.
In this article, you learn how to write Python code that allows you to automate two different tasks in GIMP: resizing images and saving them as different formats.
You can install and use both GIMP and Python on many different platforms, including Linux®, Mac OS® X, and Microsoft® Windows®. The cross-platform nature of both GIMP and Python means you can write complex plug-ins for GIMP using Python and be able to run them on a variety of platforms.
Overview of GIMP
GIMP is an open source image manipulation program that many people use as a viable alternative to some of the commercial offerings. GIMP handles complicated features such as layers and paths. GIMP supports a number of different image formats and comes with relatively complex filters. GIMP has strong community support and involvement, so it is usually easy to find information about how to use or extend GIMP.
See Resources for the link to download and install GIMP on your computer.
Overview of Python scripting
Python is an object-oriented scripting language that allows you to write code that runs on many different platforms. Python has been ported to both the .NET and Java™ virtual machines, so there are many different ways that you can execute Python. See Resources to learn how to install Python on your computer.
Many modules exist for Python that provide functionality you can reuse without writing your own (the GIMP-Python modules are an example). An index of the Python modules lists many different pre-built modules that you can use to do a variety of tasks from dealing with Hypertext Markup Language (HTML) and Hypertext Transfer Protocol (HTTP) connections to working with Extensible Markup Language (XML) files (see Resources). You can also build your own Python modules, allowing you to reuse parts of code within your enterprise.
Similar to GIMP, Python also has significant community support. This means that you can find information, as well as download and use relatively mature tools that help you in your Python development.
Before proceeding to the rest of the article, install Python on your
operating system according to the instructions on Python's site. Make sure
you have Python correctly installed by opening a command prompt and typing
python --version. The results should look
something like those in Listing 1.
Listing 1. Verifying the installation of Python
$ python --version Python 2.6.6
After you install the Python interpreter, you can create Python files in any text editor and run them with the interpreter. You can also use the PyDev plug-in for Eclipse, which offers syntax highlighting as well as some other features, such as catching syntax errors for you. Another option is to use the Python console directly in Eclipse, which is convenient for finding help.
The GIMP-Python modules should already be installed with newer versions of GIMP. To see if they are installed, open GIMP and look to see if you have a Python-Fu menu option under the Filters menu. If you see the option there, you are ready to start scripting. If you don't see that option, follow the links in the Resources section to install the Python extensions for GIMP.
If you want to use the PyDev plug-in for Eclipse, follow these steps:
- Install the PyDev plug-in by selecting Help > Install New Software.
- Use as the update site (see Resources).
- Follow the rest of the installation and restart Eclipse when done.
- After restarting Eclipse, select File > New > Project to create a new project.
- Select PyDev\PyDev Project and click Next.
- Enter your project's name (for example, MyPythonGimpPlugins).
- Clear the Use default check box and enter the location of your GIMP directory for Python plug-ins as shown in Figure 1.
Figure 1. Creating a new project with the PyDev plug-in for Eclipse
- Click the link to configure an interpreter. The Auto Config button should work as long as you have Python installed correctly and on your path.
For your project, make sure to add the folder that includes the GIMP
Python modules,
gimp and
gimpfu. Add this directory to your Eclipse
project (but don't add it to the base path) by using Project >
Properties as shown in Figure 2.
Figure 2. Adding the GIMP-Python module directory to your project in Eclipse
Click PyDev - PYTHONPATH. Then select the External Libraries tab and click the Add source folder button to add the folder in which the GIMP Python modules are installed. The path will be something like /usr/lib/gimp/2.0/python/.
You can also run the Python console in Eclipse. With the console viewable, select Pydev Console from the list of consoles.
Registering your script
The Python files go in your user's home GIMP folder. On Mac and Linux systems, that folder is ~/.gimp-2.6/plug-ins. The Python script files should also be executable and have the Python interpreter on the first line, like the standard script declarations, as shown in Listing 2.
Listing 2. An elementary Python script that prints "Hello, world!"
#!/usr/bin/python print "Hello, world!"
You need to register your Python script for GIMP to put the plug-in in one of the GIMP menus. Listing 3 shows the bare minimum script that you need to register a script in GIMP and print "Hello, World!" to the console when it is called.
Listing 3. Registering your plug-in with GIMP
#!()
The
register() method gives GIMP information
about your plug-in.
The
register() method has several paramters
that tell GIMP how to display menu options for the plug-in, and what
Python method to call when you start the plug-in from the menu. Table 1
shows the parameters for the
register() method.
Table 1. The parameters and examples
for the
register() method
You can get the most up-to-date information about the register method's parameters by opening the Python-Fu console (click Filters > Python-Fu > Console) and typing the commands shown in Listing 4.
Listing 4. Getting help using the Python console
import gimpfu help(gimfu.register)
After putting your values in the
method, save your script. Make sure that it is executable and is located
in the .gimp2-6/plug-ins folder.
After you save the script, start GIMP from the command line using the
gimp command. This should allow you to see any
information that is printed by your plug-in, including the output of the
print statement. Also, if you have any errors in your plug-in, you see
them here.
With GIMP started, go to the Image menu where you can see the new Resize to max menu item as shown in Figure 3.
Figure 3. The new menu item for your plug):)
The Python code is simply calling the
pdb.gimp_scale_image method to resize the image
after doing some elementary calculations to find what the values of the
scaled image sizes should be. Because the values put into the box are
maximum values, the script needs to check both the width and height of the
current image to see if the image's dimensions need to be constrained. If
either image dimension is larger than the maximum size, it sets the
constrained dimension to the maximum size and then calculates the other
dimension.
To find out more about other methods you can call inside your Python
script, see the Help > Procedure Browser inside GIMP.
The procedure browser for the
pdb.gimp_image_scale method is shown in Figure
4.
Figure 4. Viewing the gimp-image-scale method in the procedure browser
Running the resize plug-in script
After you add the code to perform the resize, open an image in GIMP. Click your new Image > Resize to max menu item. Your script asks you for the sizes as shown in Figure 5.
Figure 5. The input parameters for your plug-in
When you click OK, your
plugin_main method executes and your script
resizes your image.
Scripting the image transform
Now that you have the plug-in working to resize your image, you can update the Python script to also save the image in a different image format. This allows you to save the original image as a JPEG file as well as resize it to fit within the certain constraints.
The new additions to the script are shown in Listing 6.
Listing 6. Adding code to save a JPEG copy of the original image
#!
The constants used for the parameter input types come from the
gimpfu library. You can get the list of
available constants by typing the commands shown in Listing 7 in the
Python console in either GIMP or Eclipse.
Listing 7. Getting help for the
gimpfu constants
import gimpfu help(gimpfu)
The constants begin with
PF_ and define data
types that you can use for the controls on the input form.
Running the updated plug-in script
After adding the new code to the Python script to save the image as a JPEG, you can execute the plug-in by opening an image in GIMP and using the Image > Resize to max menu item. You see the updated parameter input box as shown in Figure 7.
Figure 7. The updated parameter input
Now that you've made the script and tried it on some images, you can run the plug-in on all of the images in a folder.
Running both on a folder
GIMP has a non-interactive batch mode that allows you to call GIMP commands from the command line. You can use the command-line feature to operate on all images in a folder using standard wildcards.
The method for saving the image as a JPEG, for instance, can be passed directly into GIMP's batch mode by using the command in Listing 8.
Listing 8. Using GIMP's non-interactive batch mode to save the image
gimp -i -b '(file-jpeg-save "Menu_006.png" 200 200 TRUE)' -b '(gimp-quit 0)'
However, this becomes a little more difficult to do when considering the calculations necessary for the size constraints. Therefore, this plug-in greatly simplifies both operations so you can call them from a single GIMP command.
Now that your plug-in is working and is registered in GIMP, the plug-in
has its own command in GIMP's procedure database. You can see the command
for your plug-in by going to the procedure browser (Help >
Procedure Browser in GIMP) and typing the name that you gave
your plug-in. For example, if you named it
python_fu_resize in the register method as
shown back in Listing 6, you will find it in the GIMP procedure browser as
python-fu-resize. You call this command as it's
shown in the GIMP Procedure Browser from the command line using the
gimp command and the
-i
-b flags as shown in Listing 9.
Listing 9. Calling your plug-in from the GIMP non-interactive batch mode
gimp -i -b '(python-fu-resize "myimage.png" 200 200 TRUE)' -b '(gimp-quit 0)'
GIMP opens the image you specified, executes your command using the parameters that you provide, and then quits without saving any modifications made to the original image. By using the GIMP command in the non-interactive batch mode, you can script large-scale modifications to an entire folder full of images.
The command shown in Listing 10 operates your new plug-in's command on all Portable Network Graphics (PNG) images in a folder.
Listing 10. Calling your plug-in from GIMP on all of the images in a folder
gimp -i -b '(python-fu-resize "*.png" 200 200 TRUE)' -b '(gimp-quit 0)'
Summary
Python is an object-oriented scripting language that allows you to write scripts that you can execute on many different platforms, such as Linux, Mac, and Windows. The tool support for the Python scripting language is considerable — from simple syntax highlighting in text editors to Eclipse plug-ins.
GIMP is an application that provides sophisticated editing of graphics files on many different platforms. GIMP supports the notion of plug-ins, which provide extension points that you can use to automate even extremely complex tasks by using scripting. Because GIMP supports Python scripting in plug-ins, you can use Python to extend GIMP. Using the non-interactive batch mode, you can call your plug-ins from the command line in a method suitable for scripting.
Resources
Learn
- Read about the different Python modules available to you.
- Learn more about the Python programming language.
- Visit the developerWorks Open source zone for extensive how-to information, tools, and project updates to help you develop with open source technologies and use them with IBM's products, as well as our most popular articles and tutorials.
- Stay current with developerWorks Technical events and webcasts.
- Watch and learn about IBM and open source technologies and product functions with the no-cost developerWorks On demand demos.
Get products and technologies
- Get the Python interpreter, which allows you to run Python scripts on your computer.
- To develop Python code in an IDE, start with the Eclipse IDE.
- Download the PyDev plug-in for Eclipse to develop Python code in the Eclipse IDE.
- Download GIMP, an open source image manipulation program.
-.
- The Eclipse newsgroups has many resources for people interested in using and extending Eclipse.. | http://www.ibm.com/developerworks/library/os-autogimp/index.html | CC-MAIN-2013-48 | refinedweb | 2,318 | 61.56 |
JDOM: XML Meets Java Meets Open Source.
The API is actually quite simple. The org.jdom.input package handles SAX or DOM input by offering a SAXBuilder and a DOMBuilder class. The org.jdom.output package contains the XMLOutputter, DOMOutputter, and SAXOutputter classes, which allow you to export XML as raw XML, a DOM tree, or a stream of SAX events, respectively. The org.jdom.adapters package contains a series of adapter classes for various XML parsers that can be plugged into JDOM. Finally, the org.jdom package includes the classes representing the familiar XML building blocks: Attribute, CDATA, Comment, DocType, Document, Element, Entity, Namespace, and ProcessingInstruction.
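To see how those org.jdom building blocks fit together, here is a minimal sketch that assembles a small document in memory and writes it to the console. It is written against the beta-era API described in this article (some method names, such as addAttribute, were later renamed), so treat it as an illustrative sketch rather than a definitive reference:

```java
import org.jdom.Attribute;
import org.jdom.Document;
import org.jdom.Element;
import org.jdom.output.XMLOutputter;

public class BuildXML {
    // Assemble a small document entirely in memory.
    public static Document buildSample() {
        Element title = new Element("title");
        title.setText("XML in a Nutshell");

        Element book = new Element("book");
        book.addAttribute(new Attribute("isbn", "0-00-000000-0"));
        book.addContent(title);

        Element root = new Element("library");
        root.addContent(book);
        return new Document(root);
    }

    public static void main(String[] args) throws Exception {
        // Serialize with two-space indentation and new lines.
        XMLOutputter out = new XMLOutputter("  ", true);
        out.output(buildSample(), System.out);
    }
}
```

Running the class prints the library/book tree as indented XML, the mirror image of the parse-then-output flow shown in the next section.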
Reading and WritingLet's jump into an example right away. Listing 1 shows an Echo program that parses an XML file and then writes it out to the console.
Listing 1.
import java.io.*; import org.jdom.*; import org.jdom.input.*; import org.jdom.output.*; public class EchoXML { public static void main(String[] args) { try { SAXBuilder saxbuild = new SAXBuilder(); DOMBuilder dombuild = new DOMBuilder(); Document d = saxbuild.build(new File(args[0])); Document d2 = dombuild.build(new File(args[0])); XMLOutputter xmlout = new XMLOutputter("%%%%%"); xmlout.output(d, System.out); System.out.println("\n----------------------------"); xmlout.setTrimText(true); xmlout.setIndent(false); xmlout.output(d2, System.out); } catch (Exception ex) { ex.printStackTrace(); } } }
We used both the SAXBuilder and the DOMBuilder to parse the incoming XML document. If performance is a concern to you, you should use the SAXBuilder. If you need to validate the XML document, you can specify a Boolean "true" as a parameter in the method. After the parsing is done, we produce two outputs. When using XMLOutputter, you can turn indentation on or off, specify a string that will be used for indentation and indicate whether new lines should be inserted or removed. We just used a couple of these methods for demonstration purposes in the above example.
In order to compile and run the above code, you need to download JDOM. At the time of writing of this article the latest version was JDOM Beta 5. After you unzip (or untar) the file, go to the build directory. You will find the jdom.jar file there which needs to be added to your CLASSPATH. You will also see a build.bat (or build.sh) file. By running this file, you can rebuild the jdom.jar file. You should also build the API documentation files by typing
build javadoc
at the command prompt. With the jdom.jar file in your CLASSPATH and the documentation at hand, you are ready to use JDOM. Go ahead and compile the above the program and run it with a sample XML file.
Data ExtractionReading and writing is easy. Most of the challenge comes when programs need to manipulate XML data. JDOM addresses this issue by providing an object representation of the XML document which act as your interface to the information contained in the XML document as well as its structure. To demonstrate some of the capabilities, assume that we have a file containing some customer information as shown in Listing 2.
Listing 2.
1234 Long Street1234 Long Street John Doe Chicago IL 87654
We also have a number of files each containing information about a product that we sell. These files are numbered item1.xml, item2.xml, item3.xml, etc. Here is an example:
-
Telephone 49.99 TEL-8760 1
We want to write a program that will read all the "item" files specified on the command line, determine which ones are in-stock, and then output a new XML file containing the order, which includes the customer information plus the items that will be shipped. This requires the program to read and parse various XML files, extract information from them and then put elements from different files together to create a new XML output. Listing 3 shows one way this can be done using the JDOM API.
Listing 3.
import java.io.*; import org.jdom.*; import org.jdom.input.*; import org.jdom.output.*; public class Order { public static void main(String[] args) { try { SAXBuilder builder = new SAXBuilder(); Document customerdoc = builder.build("customer.xml"); Element customer = customerdoc.getRootElement(); Element fname = (Element) customer.getChild("name").getChild("first").clone(); Element lname = (Element) customer.getChild("name").getChild("last").clone(); Element address = (Element) customer.getChild("address").clone(); Element city = (Element) customer.getChild("city").clone(); Element state = (Element) customer.getChild("state").clone(); Element zipcode = (Element) customer.getChild("zipcode").clone(); Element order = new Element("order"); order.addContent(fname); order.addContent(lname); order.addContent(address); order.addContent(city); order.addContent(state); order.addContent(zipcode); Document orderdoc = new Document(order); orderdoc.setRootElement(order); Document itemdoc; String instock; Element item; for (int x=0; x
After we parse the customer.xml file, we use the method to grab the root element of the document. This is stored in the variable customer. We can then use the method to navigate through the document and retrieve other elements of the file. We are using the method to do a deep copy of the extracted elements because we want to then append these to our output file. In other words, we don't want the elements to maintain their original relationships to their parents as we transfer them to the new file. Once we have the various elements from the customer.xml as independent "Element" objects, we create a new Element called "order" which is going to be the root element of our output document. Using the method, we add some elements to the "order" and build our hierarchy using familiar Java constructs.
Page 1 of 2
Comment and ContributeComment and Contribute | http://www.developer.com/tech/article.php/630951/JDOM-XML-Meets-Java-Meets-Open-Source.htm | CC-MAIN-2014-15 | refinedweb | 930 | 57.67 |
So, a while ago, one of the other writers here wrote a small tutorial on
parsing simple CSV files in
C#. It mostly just
showed off the string
split method, and only worked on really simple
CSV files - no quoted fields, etc. Well, we got a comment asking about
that, so today I sat down thinking I would write up a more robust
parser. But as I read through the
RFC that describes CSV files,
I thought to myself, am I suffering from NIH (not invented here)
syndrome? Do I really need to write a full CSV parser?
And, as you might expect, the answer is no. Not only did I not need to write a parser, I found that there is one that is built into OLEDB subsystem of Windows! And it actually takes fewer lines of code to use this built in parser than it does to do the simple string split algorithm that was in the previous tutorial (and it feels a lot nicer too). I was originally planning on this being a decently long tutorial, when I thought I would write my own parser - but you are actually in for a really short and simple one today. Less for me to write, less for you to read, and more functionality to boot!
So, without further ado, I think we can jump straight into the code:
using System; using System.Data; using System.IO; //not used by default using System.Data.OleDb; //not used by default namespace CSVParserExample { class CSVParser { public static DataTable ParseCSV(string path) { if (!File.Exists(path)) return null; string full = Path.GetFullPath(path);; /(); return dTable; } } }
So, first off, you will need to add the namespaces
System.IO and
System.Data.OleDb. The first we need because we will be doing some
path manipulation, and the second gives us access to what we will need
to do CSV parsing. I've created a nice static function here that takes a
path to a CSV file and returns the contents of the file in a DataTable
(which is really easy to view and manipulate).
The weird thing about all of this is that the CSV file gets treated as a
database table. We need to create a connection string with the Jet OLEDB
provider, and you set the Data Source to be the directory that contains
the CSV file. Under extended properties, 'text' means that we are
parsing a text file (as opposed to, say, an Excel file), the
HDR
property can be set to 'Yes' (the first row in the CSV files is header
information) or 'No' (there is no header row), and setting the
FMT
property set to 'Delimited' essentially says that we will be working
with a comma separated value file. You can also set
FMT to
'FixedLength', for files where the fields are fixed length - but that
wouldn't be a CSV file anymore, would it?
The next part to do is create the actual query. In this case, we want everything, so we have a "SELECT *". What are we selecting from? Well, in this somewhat twisted worldview, the directory is the database, so the file is the table we are selecting from.
Now we are into normal OLEDB territory - we create a
DataTable that we
will be filling with results, and we create a
OleDbDataAdapter to
actually execute the query. Then (inside of a try block, because it can
throw an exception) we fill the data table with the results of the
query. Afterwords, we clean up after ourselves by disposing the
OleDbDataAdapter, and we return the now filled data table.
And using the now filled data table is extremely simple - we actually talk about it here and here.
And there you go! I'm not sure how parsing CSV could be much easier. If you would like the Visual Studio project I used to test all this, you can grab it here.
Source Files:
Hi there!
I've downloaded the source, compiled it but nothing happens? No error, no data in the datagridview, nada.
What am i doing wrong?
For the issue gbroche, GKED and CraigB were discussing, I solved it this way. Notice how in CSVParserExample, TestParse.csv is in the root directory of the project. You can add a new CSV file here, too, using Solution Explorer. Once you do that, check the Properties of the CSV file and make sure the "Copy to Output Directory" attribute is set to "Copy if newer". (By default, it is set to "Do not copy" which won't work.) Once that is properly set, it should work when you build the project again.
Hi Michael, sorry to bother you on the same issue. I have the TestParse.csv and the compiled app in the same directory but still no results of whatsoever nature. No errors, no results, nothing. When I run it I get simply a form with an empty datagridview. The file property are set to "Copy if newer" and I am using the original file you added in the project.
I get a alarm like this? What did I wrong?
The type or namespace name 'DataTable' could not be found (are you missing a using directive or an assembly reference?)
Do you have
at the top of your file?
For gbroche and GKED, I had the same issue using the code above, and it turned out that there was an error - just no message (i.e., "No file found."). The .CSV file was in the wrong directory - it needs to be where the executable file is.
Craig
Hello. I'm curious, will this method take into account of commas in a piece of string? You can have a cell that has has: "Lower, Below, Under". Those are common words, but separated by commas, will this take into account as one string?
I used above code to read data from Excel it runs fine. But I have invalid data in file like I have typed dataset and I wrote double to string type data in some columns(in Excel file) but no Exception generated after reading. I need to generate Exception. Can any one suggest.
@ gbroche I am having the same issue. Do you know what might be wrong with it? Thank you.
This has been a very educational code set. As I need to create a pre-processor to read the CSV file and manipulate some fields of data and write back out to a TXT file. When I run this code and substitute in my CSV file or use your sample CSV file I get no errors but the Form displays empty. Any thoughts on what I should be looking at?
Thanks
Why the date column title get empty for this code?
I have a file with the title as "Modified date" and that have a content like "10-06-11 9:44"
as I am look in to my datatable after parse function the title is empty just like "".
any idea.?
thanks Chetan
Hi i am using the above code for getting csv file to datset. BUT In that dataset, the firstrow, firstcolumn is concatinated with the an unknowncode i.e. is "". how to remove the uncoded for fistcolumn in the datset. can you plz help.
I had the same problem and discovered you need to specify a CharacterSet in the Extended Properties. The value can be ANSI, UNICODE or a numeric code page (e.g. 65001 for UTF-8):
"Extended Properties=\"text;CharacterSet=65001;HDR=No;FMT=Delimited\""
Maybe a space before and after the file variable?
Any idea how to use WHERE clause in SQL statement. For Example, if I have a date column in csv and I want only records from certain dates I tried this condition and it does not work
string query = "SELECT * FROM " + file + " WHERE [Date Started]>='03/18/2011';
It gives me an exception when attemption dAdapter.Fill
This is cool, my only problem is my column names show up as F0,F1,F2 etc..Any idea how to get the column names? They're defined in my first row
Please ignore.. found it myself.. had to set HDR=Yes in the extended props.. Sorry! Noob in the building :)
Hi,
I tried using the code but it won't delimit. My delimiter is an equal sign (=), and i tried to put it insted of the standard delimiter, but it won't work...
code: ..... +"Extended Properties=\"text;HDR=No;FMT=Delimited(=)\"";
What am I doing wrong?
Thanks,
Great article, thanks.
How would you go about modifying this code to work with just a simple string holding the CSV data, as opposed to an actual file?
SELECT AS does not work. IE> can't rename the columns
Does this parser work for files delimited in other ways? Tab, for example?
Nope it doesn't which makes it pretty unsolid. The delimiter of the OLEDB driver is used, and it can only be changed within this driver, so there is no simple way to change the delimiter from "," to ";" or "". Unfortunately the RFC4180 doesn't specify the "," as Separator. So per example in European countries the decimal separator is "," and not "." like in US. So in European countries ";" is used as csv delimiter most likely.
The delimiter can be specified in the registry at the following location: HKEY_LOCAL_MACHINE \ SOFTWARE \ Microsoft \ Jet \ 4.0 \ Engines \ Text "Format" = "TabDelimited" or "Format" = "Delimited(;)"
Nice approach with this OLEDB connection, but I guess this issue is hard to handle with it.
By the way, if someone got a solution for this, please let me know! I like this database query thingy. ;)
I still have a problem with the OleDbDataAdapter It always seems to use the first line of the file to determine how many columns there are, then will only read that number for the rest of the file. I have a seperated file where the line lengths are different - fewer columns on the first line than the rest of the file
First thanks for this tutorial!!!
If you get the error "Der 'Microsoft.Jet.OLEDB.4.0'-Provider ist nicht auf dem lokalen Computer registriert." or "The 'Microsoft.Jet.OLEDB.4.0' provider is not registered on the local machine." you probably try to to compile it for 64 bit cpu target. The error is because there is not a 64 bit version of jet. I know because I had the same problem. To force your app to use the 32 bit change the target cpu to x86 in the advanced compiler options:
Project => Project Properties => Build => Platform target =>x64
What if Forign charecters are there. They are not uploading properly.
Thanks Buddy! Working From Reviews .Good Code!
is it possible to use this to add an additional column with a specified value for each of the rows?
Julian - Somewhat newby to C#...Your two posts were absolutely perfect. We don't have the luxury of changing our input/config file's layout (otherwise everything would be tilde- or pipe-delimited for this very reason.
I read in a CSV potentially having commas in quotes as literal values. I take that string input to build a class, then add it to a List collection of those classes. Yes, the constructor overrides allowing for either a string or array input...regardless, you shouldn't have to use a data-adapter to fill a data-table, especially if you class it out for use in other areas of the application - for mere display? sure - for classing out the type (and the props are dynamic) this is absolutely perfect...well done
Thanks for the article. Having a problem trying to read a list of reference numbers from a CSV file. When the 'number' contains an alpha the data is not being read so there's no way to notify the user about bad data. It works fine if the data is in an Excel spreadsheet. Sample Data: ITEM_REF_NO 905893 1000O 913798
Second record contains an alpha O at the end of the number but this data does not appear in the datatable.
My code is C# and is copied from the sample. string full = Path.GetFullPath(strFileName); + "]";
DataTable dt = new DataTable(); OleDbDataAdapter adpCSV = new OleDbDataAdapter(query, connString);
try { adpCSV.Fill(dt); // view the datatable here and it contains 3 rows but the second row is empty
Any ideas how to get this working?
Hey Cag,
I am having a similar issue. Although, I am reading zip codes.
for example:
ZIP: 77707 31820-3911 43062 12508-2000 33122
When I read that data using this parser it outpouts to a grid and the grid shows everying except record 2 and 4 with the dash lines, it is like the parser just makes it disapear when I output to datatable.
When I run the parser on other files with similar data the dash line and the rest of the zip code shows up. So, I am wondering if this is a whitespace issue or something. I am clueless, it works sometimes and other times it doesn't.
If anyone can help me and cag, Id appreciate it.
correction: record 2 and 4 show but they are just empty on the grid.
The issue your most likely facing is that the when you are importing your data, the Jet OLEDB is assigning the wrong datatype to your column. So, for example, if the first record in your text file only has numbers in a given column, the importer will set the data type for that column to numeric. If later the field has more than numbers, a hyphen for instances, the importer won't be able to handle it. This is common with zip codes, the first row just has 5 digits, but later some zip codes are in the XXXXX-XXXX format (which is not purely numeric).
The way around this is to manually define the schema of your text file. To do this you'll need to use schema files. Info about them can be found here:
Some things to keep in mind when using schema files: Points to remember before creating Schema.ini
?1. The schema information file, must always named as 'schema.ini'. 2. The schema.ini file must be kept in the same directory where the CSV file exists. 3. The schema.ini file must be created before reading the CSV file. 4. The first line of the schema.ini, must the name of the CSV file, followed by the properties of the CSV file, and then the properties of the each column in the CSV file.
This site is also helpful:
hah, Got it, just added/specified a delimiter in your code :P
thanks for this nifty trick!
when i run your source i see that everything is filled in one column - using the dAdapter.Fill(dTable).
is the fill method not able to set in the values in their respective columns?
//Line is string from a StreamReader string TmpLine = ""; string NewLine = ""; bool swap = false; int first = 0; int last = 0; for (n = 0; n \< Line.Length; n++) { if (Line.Substring(n, 1) == "\"" && swap == true) { swap = false; last = n; //next instance TmpLine = Line.Substring(first, (last - first) + 1);//get the quoted section NewLine = TmpLine.Replace(",", "\~"); Line = Line.Replace(TmpLine, NewLine); n++; //skip and extra 1 } if (Line.Substring(n, 1) == "\"" && swap == false) { swap = true; first = n; //first instance }
}
//Line is now a as per orginal CSV but with text,text,"text\~text" instead of text,text,"text,text" .... now just use string.split, to finish off. You can change the "\~" tilda later.
If you want to build a parser that will deal with quotes with commas in, just split the whole line by quotes first, then replace the commas in that field with a tilda or something, join the first split back up into a line, the split the line by comma and then replace the tilda....simples ! 10 mins and job done.
Er, no, because you'll replace the separators with a tilde too.
I tried this code:
But i'm having a problem with this file :
CategoryId, CategoryName 1,Cat1 2,Cat2 3,Cat3
After running the code (VB version), my datatable only contains one column, and each datarow contains a field with the entire text line
Is there a way to use the first line as column name definition?
I am trying to add a where clause to this algorithm. When my query is "SELECT * FROM [hospitalData.csv] WHERE zipcode \<= 80010" When I run this I also get the "Syntax error in FROM clause." The exception occurs when executing dAdapter.Fill(dTable);
Can anyone spot the problem? Is having a WHERE clause possible when using this technique?
Just chiming in to say that this is a great article, but I've also got an error message for this piece:
The error is:
"Could not find installable ISAM."
What gives?
I got that error too. \~Matt
getting a oleDBException with Cannot update, database or object is read-only.
on line Adapter.Fill() -------------------- try { //fill the DataTable dAdapter.Fill(dTable); } catch (InvalidOperationException /*e*/) { }
This may be your issue:
In my case the filename where too long (>70 characters) so the brackets didn't help. I renamed the file to Guid.NewGuid().ToString() .tmp and it solved my problem.
No doubt this is great, but i have face one problem with this. In my csv file i had decimal data type, after parsing csv using OLEDBConnection I checked the data and i came to know that my data is converted to integer.
Is there any solution for it???
Hi Mehul,
If you want to solve this problem, you may need to create a schema(.ini) file to define your data types in file.
Please refer to following links:
This is a great routine! Thanks for posting this gem.
getting a oleDBException with Cannot update, database or object is read-only.
on line Adapter.Fill() -------------------- try { //fill the DataTable dAdapter.Fill(dTable); } catch (InvalidOperationException /*e*/) { }
-------------------- i only added the headers = yes on the connection string. the file on disk is read/write and not open.
any idea what i could be doing wrong. I had this working before. the dat file is from a recently opened file and then resaved with a few lines ripped off and resaved as a dat file so the parser will work well. could the adapter think the file is still open for writing.??
Great writeup, that connection string has saved me a lot of time migrating a legacy system record file which I managed to get to CSV format.
Is there any provision for using a delimiter other than a comma?
FMT=Delimited(,)
Substitute that in, in place of the existing parameter, and change the comma to your delimiter.
Couldn't get this to work.
Is there a generalization that allows reading from delimited files that use a separator other than the comma?
Nice article.
I have found that there can be issues with some foreign characters example the file contained äüö and then the data table held äüö
But very handy for file with non foreign characters.
Thank you.
What to do If file has foreign characters?
When I try the above code I receive an exception "Syntax error in FROM clause." Excel can read the file with no questions or errors. The variable 'query' is set as follows "SELECT * FROM hba-info.csv". The exception occurs when executing dAdapter.Fill(dTable);
if you have “Syntax error in FROM clause.”... is beacause u must to clear the whitespaces in the filename.
Also, putting sqaure brackets around the file name in the FROM clause of the select string may fix the Syntax error in FROM clause problem.
Thanks for the great article. To answer Tim Drews question and I came up against the same thing. Pulled my hair out for 3 hours. If the FileName contains any "-" you need to Square Bracket the file name.
“SELECT * FROM [hba-info.csv]”
This worked for me thanks.
Thanks, DigitalDan3!
I was struggling with this.
Great stuff! I would never have thought of that... | http://tech.pro/tutorial/803/csharp-tutorial-using-the-built-in-oledb-csv-parser | CC-MAIN-2014-10 | refinedweb | 3,353 | 73.88 |
Related Posts:
Part I:
Part II:
Part IV:
Phew, okay, so we have covered a lot in the past two posts, and we're probably going to cover a whole lot more in this one.
A Note about Unity's built-in JSON Serializer
Before proceeding further, in my previous posts I said that .NET 2.0 did not have a JSON serializer. That is true. Unity, however, has had a JSON serializer in its API since some version in the 5.x branch. The benefit of the JSON serializer in the Unity API is that it's fast. The problem is that it can't serialize and deserialize certain kinds of classes, including some within Unity's own API (yes, you read that correctly). When it comes to JSON serialization with Unity you are going to have to make a decision: are you after speed, or are you after the ability to serialize practically any class? As mentioned in Part II, our JSON serializer of choice is still Full Serializer. We like dictionaries and we like to have the ability to serialize them.
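To make that tradeoff concrete, here is a hypothetical sketch comparing the two on a dictionary. The class and variable names are my own invention; the Full Serializer calls (fsSerializer, fsJsonPrinter) are from its public API, but treat the exact behavior shown as an assumption you should verify in your own project.

```csharp
using System.Collections.Generic;
using FullSerializer;
using UnityEngine;

public static class JsonComparison
{
    public static void Demo()
    {
        var scores = new Dictionary<string, int> { { "level1", 100 } };

        // JsonUtility does not support dictionaries - you typically
        // get an empty object ("{}") back rather than your data.
        string unityJson = JsonUtility.ToJson(scores);
        Debug.Log(unityJson);

        // Full Serializer handles the dictionary.
        var serializer = new fsSerializer();
        fsData data;
        serializer.TrySerialize(scores, out data);
        string fullJson = fsJsonPrinter.CompressedJson(data);
        Debug.Log(fullJson);
    }
}
```

If speed matters more than coverage (say, for flat, `[Serializable]` data classes saved every frame), JsonUtility is the better fit; for arbitrary game state, Full Serializer is.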
Notes on Offline Changes Since My Last Post
I added a Full Serializer folder in the project we created in our last post. You can follow the steps I used in Part II to create a folder if you forgot how to do that.
Alrighty, time to get to business. The first thing we are going to do is get Full Serializer into our project.
Adding Full Serializer to your Unity Project
The process is fairly straightforward, find Full Serializer on GitHub, download it, extract it and copy it into your Unity project. Details below for those who need them:
Go to the Full Serializer GitHub page:
Click on the green Clone or download button:
Click on the blue Download ZIP button. This will download the Full Serializer source to your download directory on your computer. For Windows users this would be your current user's download folder by default. I'm not sure how Macs work, so I don't know where web browsers on Macs save files from the Internet to.
Once downloaded, extract the zip file:
Go into the folder that was extracted, and navigate to the fullserializer-master\Assets\FullSerializer\Source folder. Once there, press CTRL+A to select everything, and then CTRL+C to copy it to the clipboard.
Next, go to your project's folder and navigate all the way into our Serialization Folder (from Part II). Right click, and select Paste.
Now go back to Unity. When Unity gets focus, it will detect that a change has occurred under the Assets folder, triggering a recompilation of your project.
Note: This is not where I would put Full Serializer normally; I usually create a "Third Party" folder in my projects, but for the sake of this tutorial I didn't want to go through all the screen shots just to create a more organized folder structure. You really can copy it anywhere you want in your project.
Okay, so now we have added, imported, copied (use whatever term you want) Full Serializer into our new Unity project and can start writing our Serialization Manager.
The Singleton Design Pattern, Unity and Multithreading
So, here's the lowdown. The Unity engine itself is multi-threaded. However, the Unity API itself is not thread safe and will throw an error if you attempt to make API calls or use an object of a Unity API class in a thread other than the main thread.
Back in the day, when I was young and had more hair on my head, the Unity engine was not as multi-threaded as it is today, which contributed to lower performance. Today it's one of their top priorities, and throughout every revision, even the minor ones, there seems to be something in the release notes about another thing getting the multi-threading treatment.
Which brings us to singletons. You can make them thread safe, or you can make them not thread safe. As a developer, I prefer to err on the side of caution and optimism, in this case the hope that one day I will be able to execute API calls in separate threads, so I prefer to use a thread safe singleton.
To clarify, you can write .NET 2.0 C# code in your Unity project that executes in its own threads, as long as that code does not use any classes from the Unity API or make calls into the Unity API. However, if you do this you cut yourself off from build targets like WebGL that do not support multi-threading. The caveat is that there is a new technology in the works called WebAssembly, which WebGL will eventually sit on top of, that does support multi-threading. If you're building for iOS, Android, macOS, Windows, Linux... you should be fine, but if you want your game to run as an HTML5 / WebGL game in a browser window (for example, hosted somewhere and accessed through Facebook), you're out of luck. No multi-threading your code then, at least not until WebAssembly arrives and is supported by Unity.
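One common way to live within that restriction is to do the heavy lifting on a background thread and marshal any Unity API calls back to the main thread through a queue drained in Update(). The sketch below is my own illustration of the pattern, not an official Unity recipe, and all the names in it (MainThreadDispatcher, ExpensiveCalculation) are made up:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using UnityEngine;

public class MainThreadDispatcher : MonoBehaviour
{
    private readonly Queue<Action> actions = new Queue<Action>();
    private readonly object queueLock = new object();

    public void Enqueue(Action action)
    {
        lock (queueLock) { actions.Enqueue(action); }
    }

    private void Start()
    {
        // Safe: nothing inside the worker thread touches the Unity API.
        new Thread(() =>
        {
            string result = ExpensiveCalculation();
            // The Unity API call is deferred; it runs later on the main thread.
            Enqueue(() => Debug.Log(result));
        }).Start();
    }

    private void Update()
    {
        // Drain the queue on the main thread, where Unity API calls are legal.
        lock (queueLock)
        {
            while (actions.Count > 0) { actions.Dequeue()(); }
        }
    }

    private string ExpensiveCalculation()
    {
        Thread.Sleep(1000); // stand-in for real work
        return "done";
    }
}
```

Remember that this whole approach is off the table on WebGL until multi-threading support lands there.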
Ok, so we are going to with a thread safe (not multi-threaded, but thread safe for the future) singleton pattern, which can be found here (the last example at the very bottom):
It looks like this:
using System;

public sealed class Singleton
{
    private static volatile Singleton instance;
    private static object syncRoot = new Object();

    private Singleton()
    {
    }

    public static Singleton Instance
    {
        get
        {
            if (instance == null)
            {
                lock (syncRoot)
                {
                    if (instance == null)
                    {
                        instance = new Singleton();
                    }
                }
            }
            return instance;
        }
    }
}
What Does All That Code Do?
Glad you asked. Lets run through it real quick.
using System;
System is what we call a namespace. A namespace is a way to classify a set of related objects. Saying that we are using the System namespace in this file gives us access to all the objects (mainly classes) contained within the System namespace.
public sealed class Singleton
The word public tells us the accessibility level of the following class. Public makes this class accessible from anywhere in code, which is what we are looking for. A more complete explanation of accessibility levels can be found here:
The word sealed prevents any other class from deriving from this class.
The word class tells the compiler we are declaring a class.
The word Singleton is the identifier for the class we are declaring, so we have a way to reference this class in our code.
After that we encompass the declaration of our class in curly braces ({, }) and can start defining our newly declared class.
private static volatile Singleton instance;
The word private tells us the accessibility level of the variable we are declaring. Because it is private, the variable named instance is not accessible outside of this class or in any class derived from this class (which can't happen because we declared this class with the sealed keyword).
The word static means that this variable belongs to the class itself rather than to any particular instance of the class (of course, the variable holds an instance of the class we are declaring; it will hold the one instance of this class that will ever exist while your app/game is running).
The word volatile has to do with multi-threading. When your source code gets compiled, the compiler (the program that converts your source code into a program or library) can optimize your code to make it run as fast as possible. Sometimes those optimizations are only correct under certain assumptions, such as your program or game running on one thread. Marking a variable with the volatile keyword prevents the compiler from performing optimizations that could leave the variable out of sync when it is used in a multi-threaded project.
The word Singleton tells the compiler what type of variable we are declaring; in this case a variable of type Singleton.
The word instance is the name we are giving this variable.
private static object syncRoot = new Object();
This line is fairly similar to the previous line with two exceptions; first, instead of declaring a variable of type Singleton it is declaring a variable of type object (System.Object, to be exact - there's that namespace thing again!). The second difference is that we are instantiating an object of that type (object) when its declared.
So what is this syncRoot variable all about? Well, to make a variable or function thread safe one method is to have a variable that is "locked". When that variable is locked the code following it cannot be called again until the lock is released. This way, in a multi-threading environment only one thread can be in the code following a lock statement on a variable, ensuring that you don't get any messy bugs with variables being overwritten in other threads as they are being used on the current thread.
```csharp
private Singleton()
{
}
```
Phew, okay. Moving along... these three lines are what we call a constructor. The constructor of a class is a special type of function. It bears the same name of the class. The private keyword here ensures that only the class itself can instantiate an object of this type. Even though the constructor has no code in it, we will need it defined to fully implement the singleton (so that we have an instance).
The next (and final) part of a basic thread safe singleton comes with the rest of the code. The code defines what we call a property. Properties are often used to provide a way to access private or protected variables in a class - in this case our private Singleton instance.
A simple example of a property looks like this:
```csharp
public class SomeClass
{
    private Int32 m_SomeInteger;

    public Int32 SomeInteger
    {
        get { return m_SomeInteger; }
        set { m_SomeInteger = value; }
    }
}
```
Since m_SomeInteger is private, nobody can access it directly; however, we have defined a property that implements both getting and setting the value of m_SomeInteger. Example below:
```csharp
SomeClass someClass = new SomeClass();
someClass.SomeInteger = 123;
Int32 aVariable = someClass.SomeInteger;
```
In the example code above, we create an instance of SomeClass. Then, we use the SomeInteger property to set the value of m_SomeInteger and then get the value of m_SomeInteger.
Some of the benefits of using a property to get or set the value of a variable include being able to keep your variable private in your class (no direct access to the outside world) and that access to your variable can be mitigated by a set of rules in the get code block and set code block. In fact, this is what the property for our Singleton instance does.
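Since the paragraph above leans on "rules in the get code block and set code block", here is a tiny sketch of such a rule (in Java, with made-up names; getters and setters are Java's spelling of properties): the setter refuses to store a negative value, which a bare public field could never enforce.

```java
class Player {
    private int m_Health = 100;

    public int getHealth() {
        return m_Health;
    }

    public void setHealth(int value) {
        // Rule enforced on the "set" side: never store a negative health.
        if (value < 0) {
            value = 0;
        }
        m_Health = value;
    }
}
```

Every caller has to go through setHealth, so the rule is applied everywhere automatically, instead of being repeated at each assignment site.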
When the Singleton instance is accessed through the get code block of the Singleton property a check is made to see if it has already been created or not (the null check). If it hasn't been created yet, we use our syncRoot object to lock access to the code in the following curly braces, which does another instance check. Why? Because in between the time of the first instance check and the locking of the syncRoot object another thread (assuming we had gameplay code running on separate threads) could have tried to do the same thing. If it is still null, we create our Singleton instance.
Alright, well, we got some coding in. The last thing I'm going to do on this post is take the boilerplate Singleton code that we went through and turn it into our Singleton Manager, using some naming conventions that I use personally. Here's how it looks in the SingletonManager.cs file that we created in Part II:
```csharp
using System;

public sealed class SerializationManager
{
    private static volatile SerializationManager m_Instance;
    private static object m_syncRoot = new Object();

    private SerializationManager() { }

    public static SerializationManager Instance
    {
        get
        {
            if (m_Instance == null)
            {
                lock (m_syncRoot)
                {
                    if (m_Instance == null)
                    {
                        m_Instance = new SerializationManager();
                    }
                }
            }
            return m_Instance;
        }
    }
}
```
And finally, the Serialization Manager class in its current form in Visual Studio:
And with that, we'll call it a wrap. In the next post we will begin fleshing out this class with serialization, deserialization and helper functions.
If you have any questions, comments or suggestions let me know. I'm usually in late-night zombie mode when I write this stuff, so it's easily possible that I may have missed something.
Thanks for reading!
Hi @theramahal. I work at Unity, the game engine, and I'd definitely like to get your insight on how you'd use cryptocurrency across a mobile platform that powers 50% of all mobile games. I'm wondering if you or your audience is familiar with the Unity game engine and development platform. If so, from a gaming or social perspective, how would you use cryptocurrency in the Unity game engine?
I am trying to store all the data into a struct so I can change it later.
```c
//this data is in file
abc 1 11 22
def 2 33
ghi 0 22 11 33
asd 2
```
I was planning on making an array of structs, so the 1st line will have one struct, the 2nd line will have a 2nd struct.
```c
struct node {
    char *alp;
    int x;
    int ar[10];   /* size goes inside the brackets: "int ar[]10;" won't compile */
};

int main()
{
    struct node buff[20] = { { NULL, 0, { 0 } } };  /* one struct per file line */
    int i = 0;

    /* intended loop, in pseudocode: read one line per pass with fscanf():
         buff[i].alp = the word       (e.g. "abc")
         buff[i].x   = the first int  (e.g. 1)
         buff[i].ar  = the rest       (e.g. 11, 22)
         i++;
    */
    return 0;
}
```
Arduino Forum :: Members :: Earendil :: Show Posts
1. Using Arduino / Sensors / Re: Fuel Pressure Sensor for Vehicle Application (on: January 28, 2013, 08:25:44 pm)
Quote from: Hauge on January 25, 2013, 01:45:07 pm
the simplest solution (I think) would be to use an analog oil pressure sensor
I hadn't thought to use an automotive oil pressure sender... So something like this perhaps?
This is where I'm rather unsure about how "most" pressure units work. Are they linear output, such that getting a handful of readings from it I can map the rest out pretty easily? Because I'm not finding a spec sheet for it, and I wouldn't really expect to find one for a cheap generic unit like that.
Thoughts? It certainly borders on cheap enough to play around with, but I don't like burning $20s either if someone can tell me that's all I'd be doing
2. Using Arduino / Sensors / Re: Fuel Pressure Sensor for Vehicle Application (on: January 25, 2013, 12:27:08 pm)
Quote from: michinyon on January 23, 2013, 11:38:08 pm.
Yeah. The definition of old keeps on changing. While the car is an 89, the body dates to 1984, which means the thing was designed 30 years ago. That's a long time in the automotive world so when choosing between calling it a "new car" or an "old car" I tend to go with the latter, even if it is post many of the seminal car advancements of the last 60 years.
3. Using Arduino / Sensors / Re: Fuel Pressure Sensor for Vehicle Application (on: January 24, 2013, 01:32:28 pm)
Quote from: retrolefty on January 23, 2013, 09:27:31 pm
Thanks, Lefty. The Data sheet states
"Chemical Compatibilities: Any gas or liquid
compatible with 304L & 316L Stainless Steel. For
example, Motor Oil, Diesel, Hydraulic fluid, brake fluid,
water, waste water, Hydrogen, Nitrogen, and Air."
So I think I'm good. And it's nice to know what I'm paying for when buying a $100 pressure sensor!
4. Using Arduino / Sensors / Re: Fuel Pressure Sensor for Vehicle Application (on: January 23, 2013, 08:45:41 pm)
Hey! A Reply! I'll take it
The car is a 1989 BMW 325i, and uses a standard Bosch FPR. and I'm positive that's the correct pressure.
Out of curiosity, do you think it's oddly high, oddly low, or odd that it's measured in Bars?
5. Using Arduino / Sensors / Re: Fuel Pressure Sensor for Vehicle Application (on: January 23, 2013, 03:12:50 pm)
I wish I knew if the 21 views and no replies means that I'm spot on, or so far out in left field that I'm not worth helping...
Since I can't decide which one it probably is, I'm probably out in left field, aren't I?
6. Using Arduino / Sensors / Fuel Pressure Sensor for Vehicle Application (on: January 21, 2013, 05:06:57 pm)
I'm looking for some validation on my choice of products for my intended purpose. I'm a software engineer (UI level) with an old car as a mechanical hobby. Sometimes these two things collide in the middle, in a world of electronics that I know so little about.
Quick explanation of my project: to use my Arduino to monitor and log vehicle parameters for the purposes of letting the operator know when those parameters are out of bounds.
My current questions revolve around monitoring the fuel pressure, and the correct pressure sensor to use. I'm trying to come up to speed on the world of sensors and how one describes the sensors, but I could use some help in knowing if my chosen sensor will fit the bill. I try and avoid as many "$90 whoopies!" as I can.
I think I've narrowed it down to the following set of sensors:
What remains, I am not sure about. For example, for the Arduino and my purpose, which is the ideal output range, 0.5v-4.5v or 1v-5v? How about input voltage? 8v-30v sounds super flexible, but what is the trade off? Any chance that the Hall effect might make a measurable impact on the readings?
The following are all the project parameters that I'm aware of for this particular sensor:
Pressure of fuel line 3 bars (43.5psi). High pressure unlikely, mostly detecting low pressure.
Under hood sensor location. Need to withstand heat and remain uneffected. However, a cooler location can be chosen so it shouldn't need to withstand anything close to 100C
Optional voltage for powering: Varying 12.6v-14v car power
Or
5v Arduino output.
Accurate to within 5 PSI
And for the sake of argument, let's assume for a second that I know how to blow things up, and not blow things up.
Thanks guys! Let me know if you think I'm missing any parameters that should be taken into consideration when trying to make this measurement.
7. Forum 2005-2010 (read only) / Interfacing / Re: Minicom doesn't pick up serial communication (on: May 10, 2010, 01:08:58 am)
I'm open to all suggestions and clarifications!
I'm aware that one can't have both the Arduino USB serial AND pins 0 and 1 (TX/RX) active at the same time. Is that correct?
That's the reason the code is what it is right now. I don't have an independent power source at the moment, so I need the USB plugged in for power.
But if you mean on the receiving end using the same port, I can assure you it's two different ports, because it's two different hardware devices
8. Forum 2005-2010 (read only) / Interfacing / Re: Minicom doesn't pick up serial communication (on: May 09, 2010, 09:22:45 pm)
Thanks for the quick reply!
I took a look through the datasheets, and there is some good info in there. However the one other thing besides baud rate that I KNOW is correct is the device/port number.
I'm making a new cable now, just to eliminate that as a potential problem. However, I've gone through my breadboard and wires with a multimeter to make sure things are going where I think they should be. Not that it's in the least bit complicated :-?
And sorry about not using a CODE tag, but I wasn't sure what it is on this board, and out of the half million buttons I don't see one for adding code tags
9. Forum 2005-2010 (read only) / Interfacing / Minicom doesn't pick up serial communication (on: May 09, 2010, 08:46:26 pm)
Long time lurker, first time being an OP. I've searched high and low for an answer, and I'm finally resorting to any help this gracious community can offer.
I have grand plans in mind, but being new to the Arduino, I'm attempting baby steps. Eventually I want the Arduino to communicate via the RX/TX lines with my own personal software project. As a first step, I wanted to see if I could get the Arduino to send data to minicom, a solid, well-known command-line program for serial communication. However, even that first step has failed, and I really don't know why.
Here is what is running on the arduino, put there by the 018 IDE running on OSX.
```cpp
#include <NewSoftSerial.h>

NewSoftSerial mySerial(2, 3);

void setup()
{
  Serial.begin(4800);
  mySerial.begin(4800);
}

void loop()
{
  Serial.println("Hello, world!");
  mySerial.println("Hello, world?");
  delay(500);
}
```
Here is what I know:
The USB cable is connected, and the Arduino serial monitor picks up the traffic over the USB cable.
I am wired rx <--> tx and vice-versa
The baud rates are matched on the sending and receiving.
minicom is in a state of "OFFLINE", which has something to do with a DCD line?
The tx LED on the arduino fires repeatedly with the proper delay.
My knowledge of serial communication and minicom are close to nil. It's probably a settings problem, but I can't for the life of me figure it out, or find someone else that has explained it.
Any help out there?
10. Forum 2005-2010 (read only) / Development / Serial Read in Objective C under OSX? (on: May 25, 2009, 03:12:40 pm)
Greetings everyone!
I'm quite new to the Arduino community, and have only last night gotten one of the sample programs up and running.
I'm in the "Feasibility and Research" part of my project planning, so I have some general questions that google just isn't answering for me. I'm hoping someone around here can.
A tiny bit of background: I'm a recent CS graduate, with little electronic experience, and no real OSX programming experience. As a personal project, I want to read simple analog signals using the Arduino, and pass those signals to my computer as integer values. I would then write a program in Objective C, and be able to read those integer values in and display them.
But before I dive in, what I can not find an answer to, is if it's possible to read in a "serial port" in Objective C/OSX from the Arduino board, while they are connected via USB. The Arduino IDE appears to do just this sort of thing, however since I can find no examples or talk of it on the internet, I'm a bit nervous that what I want to do is impossible, or near enough to it that no one bothers.
Can anyone answer this for me?
Examples/ links to how-tos would be awesome, but unnecessary at this point. Knowing it's possible is what I really care about :-)
Thanks everyone!
~Tyler
11. Forum 2005-2010 (read only) / Interfacing / Re: Connecting Arduino to iPhone/iPod serial (on: August 23, 2010, 02:52:39 pm)
That would be awesome. I'll probably work on it too since my project is an educational endeavor. Still, an iPod touch display screen falls into the feature creep category of my project
12. Forum 2005-2010 (read only) / Interfacing / Re: Connecting Arduino to iPhone/iPod serial (on: August 22, 2010, 09:37:25 pm)
Son of a.... I have the 1.4 breakout, and I was not aware of the manufacturing error. 30 seconds with a soldering iron and I had my iPod touch screaming "Hello World!" that was produced by the Arduino! Conrad, I could almost kiss you :-/
13. Forum 2005-2010 (read only) / Interfacing / Re: Connecting Arduino to iPhone/iPod serial (on: August 04, 2010, 05:21:29 pm)
14. Forum 2005-2010 (read only) / Interfacing / Re: Connecting Arduino to iPhone/iPod serial (on: August 04, 2010, 05:07:14 pm)
15. Forum 2005-2010 (read only) / Frequently-Asked Questions / Re: Picking a board, newbie help needed (on: March 13, 2009, 02:58:43 am)
Thank you very much yet again for the info!
Google had yet to find that particular wiki :-)
So this all looks very doable without becoming a(n) EE.
Unless someone cares to contradict my current understanding, or someone else's advice, I guess my only other question is this: is it possible programmatically, either in Sketch or in whichever computer-side language is chosen, to read in multiple analog signals concurrently? I'm not asking "how", so much as "is it possible with ease". I want to make sure I can get this board to do what I want it to do, without becoming an EE, before I purchase it and sink time into a project. After all, I do still have this darned CS degree to finish ;-)
Tabular form "This form contains unsaved changes" message - HarryF, Apr 4, 2014 11:40 AM
I am trying to get rid of that message because I have a collection behind the TF so I don't need this reminder. I found this code:
```javascript
function resetFormCachedData() {
    var c = $x_FormItems($x('report_p294GridEdit')); // report static id
    for (var e = 0; e < c.length; e++) {
        if (c[e].name != "X01") {
            gTabFormData[e] = c[e].value;
        } else {
            gTabFormData[e] = "0";
        }
    }
}
```
It's not clear what goes in place of gTabFormData.
I tried searching through the DOM for report_p294GridEdit in firebug but it looks like firebug search only looks at open nodes.
1. Re: Tabular form "This form contains unsaved changes" message - TexasApexDeveloper, Apr 4, 2014 1:06 PM (in response to HarryF)
You could look at this link and see if this solution would suffice..: APEX 4.1 is it possible to remove pagination "unsaved changes" message? - Application Express
Thank you,
Tony Miller
Ruckersville, VA
2. Re: Tabular form "This form contains unsaved changes" message - HarryF, Apr 4, 2014 1:26 PM (in response to TexasApexDeveloper)
That does not work. I get
ReferenceError: gTabFormData is not defined
gTabFormData[e] = c[e].value
3. Re: Tabular form "This form contains unsaved changes" message - TexasApexDeveloper, Apr 4, 2014 2:06 PM (in response to HarryF)
Ohh, looking further into the posting.. It depends upon the user submitting the page... Thus the array is available then...
Thank you,
Tony Miller
Ruckersville, VA
4. Re: Tabular form "This form contains unsaved changes" message - Tom Petrus, Apr 7, 2014 1:05 AM (in response to HarryF)
What apex version? Very important as the javascript files changed between 4.1 and 4.2 for example. gTabFormData is used in the widget.tabular.js file and is in the apex.widget.tabular namespace.
Check the file to get some ideas. Eg function _setModified will loop over the items and compare values. You will also need to make sure that upon pagination you are not met with the message. Probably the easiest way would be to update the value in gTabFormData as the value is changed in the form/backend.
5. Re: Tabular form "This form contains unsaved changes" message - HarryF, Apr 7, 2014 9:48 AM (in response to Tom Petrus)
One thing about my approach. It has a query against a collection behind it. I don't know if tabular forms behave differently when there is a table bound to it versus a query against a collection.
4.2. This is what I see at namespace
```javascript
/**
 * @namespace apex.widget.tabular
 **/
apex.widget.tabular = {};
```
This is _setModified. It references gTabFormData.
```javascript
function _setModified ( pRegionId ) {
    var lModified = false,
        lItems = $x_FormItems( $x( "report_" + pRegionId )),
        lItemCount = lItems.length,
        lChangedItemCount = tabular.gChangedItems.length;
    // Iterate over items and check against original values. Also exclude pagination select list (X01) from check.
    // Set highlight and modified flag.
    for ( var i = 0; i < lItemCount; i++ ) {
        if ( ( tabular.gTabFormData[ i ] !== lItems[ i ].value ) && ( lItems[ i ].name !== "X01" ) ) {
            $x_Class( lItems[ i ], "apex-tabular-highlight" );
            lModified = true;
        }
    }
    // Iterate over previously changed items (built on add row), and set highlight and modified flag.
    for ( var j = 0; j < lChangedItemCount; j++ ) {
        $x_Class( tabular.gChangedItems[ j ], "apex-tabular-highlight" );
        lModified = true;
    }
    return lModified;
}
```
6. Re: Tabular form "This form contains unsaved changes" message - HarryF, Apr 7, 2014 9:52 AM (in response to HarryF)
Is there a way to see what js files are open? I am assuming the file under $ORACLE_HOME is in use but maybe not.
Is there a way to overload the paginate function locally so it is not used?
7. Re: Tabular form "This form contains unsaved changes" message - Tom Petrus, Apr 7, 2014 10:13 AM (in response to HarryF)
If you have a wizard generated form it'll use apex.widget.tabular. Not so if you have a query built with apex_item. It doesn't care if you use a table or a view.
You don't need to check files on the filesystem, that won't help. The files are simply references by url and fetched, they're not getting locked. All the apex namespaced objects are included in the desktop-all javascript file when you run the page in normal mode, but you'll get a better idea of which files are being used when you run in debug mode and check the page source or script tag in browser developer tools.
There is also no real way to alter the check for modifications. It's also worth mentioning that once you start modifying this behavior you have to be aware that the behavior may change in next version. The docs state that they are internal functions.
I wouldn't override paginate, it'll only cover the pagination page and not other page submissions I believe.
It might be worth a try to bind change handlers to the items involved, and set their values to the gTabFormData array as they're being changed. That way it'll not detect any changes.
This is what the code you posted does: loop over each item in the form and store the value in in gTabFormData. To reference gTabFormData in 4.2, all you need to do is change it to apex.widget.tabular.gTabFormData.
8. Re: Tabular form "This form contains unsaved changes" message - HarryF, Apr 7, 2014 10:34 AM (in response to Tom Petrus)
This code worked in 4.2
```javascript
function resetFormCachedData() {
    var c = $x_FormItems($x('report_MYSTATICID')); // report static id
    //alert('len=' + c.length);
    for (var e = 0; e < c.length; e++) {
        if (c[e].name != "X01") {
            apex.widget.tabular.gTabFormData[e] = c[e].value;
        } else {
            apex.widget.tabular.gTabFormData[e] = "0";
        }
    }
}
```
Thanks
9. Re: Tabular form "This form contains unsaved changes" message - TexasApexDeveloper, May 1, 2014 10:55 AM (in response to Tom Petrus)
Tom,
Sorry to jump into this conversation late, but you said above : "...If you have a wizard generated form it'll use apex.widget.tabular. Not so if you have a query built with apex_item." What does APEX use if you have a manually created tabular for to track changes?
We're running into a tabular form with 2 LOVs (cascading LOVs, to boot), and the message is showing up when we paginate. Even with the above function firing, it does not seem to find X01 when it goes looking for the name object..
(APEX 4.2, Oracle 11g, APEX Listener 2.02, Firefox 17 browser)
Thank you,
Tony Miller
Ruckersville, VA
10. Re: Tabular form "This form contains unsaved changes" message - HarryF, May 1, 2014 11:15 AM (in response to TexasApexDeveloper)
Tony, one thing I ran into is a scoping problem that I don't understand.
This gave me the pagination message:
```javascript
apex.server.process("PROCESS",
  apex.server.process("PROCESS",
    {"x01": Id},
    {dataType: "text", success: function(pData){
      $('input[name="' + teName + '"]').each(function(){
        // update items on the page
      });
    }});
  resetFormCachedData();
}});
```
This fixed it:
```javascript
apex.server.process("PROCESS",
  apex.server.process("PROCESS",
    {"x01": Id},
    {dataType: "text", success: function(pData){
      $('input[name="' + teName + '"]').each(function(){
        // update items on the page
      });
      resetFormCachedData();
    }});
}});
```
Given these arrays should be scoped outside of this function I don't understand this. I nested multiple levels and I needed to put this at the deepest nested code.
harry
11. Re: Tabular form "This form contains unsaved changes" message - Tom Petrus, May 2, 2014 7:34 AM (in response to TexasApexDeveloper)
Tony,
When you have a classic report where you use apex_item in the source, apex has no notion of the region being a tabular form. Hence it will not include the tabular widget javascript files. I don't know why it would in your case, something in your setup may be different? Are you sure the region type is not "SQL Query (updateable report)" - which would indicate the region is a tabular form? Or are you use a save before exit plugin or something similar? When I test this with a simple example on a fresh page then I will not get warning messages - at all, just as expected.
x02 is used for the row selector column included in a tab form, more specifically the check-all checkbox. I'm not sure what x01 stores. A new page with a tab form on it will not have x01 stored.
Harry,
That javascript looks very wrong. Why do you have 2 apex.server.process calls nested in eachother? That explains things going wrong really. Also, note that apex.server.process is an async call. You should reset the cached data in the success function just like the items you are changing. That is why the second call works - "more". The reason why you are getting the message depends on what your setup is. For instance, where do you call this code? Did you bind it somehow to the pagination for example? The time and place where you call this may greatly matter. Don't forget that when paginating apex will still use it's internal functionality and check for changes. If your cache-reset is too late, either through executing too late or being executed too late, then it will of course still pop up the message.
12. Re: Tabular form "This form contains unsaved changes" message - TexasApexDeveloper, May 2, 2014 10:37 AM (in response to Tom Petrus)
Tom,
My apologies, Harryf and I work on the same project, but on different pieces of it. The above question I have was from the 3rd member of our team that is working on a manually built tabular form. They initially built the tabular form through the wizard, but after it was created, took the select that was reading from a table and now has it reading from a collection built on the form. He is having to use apex_item since 2 out of 3 controls on the form are cascading LOVs. Since tabular forms don't support cascading LOVs exactly like normal forms do, he has coded his own items with an on-demand process to handle the syncing of the LOVs (pre APEX 4.x coding).
How can we find where APEX is storing the changed items information on a manually built tabular form, so we can disable that annoying message for our users?
(I personally don't know why Harryf has gone and coded his tabular form using a manual tabular form.. He also is using a cascading lov, but I believe you can mix having wizard built controls and the manual apex_item in your tabular forms, can you not?)
Thank you,
Tony Miller
Ruckersville, VA
13. Re: Tabular form "This form contains unsaved changes" message - HarryF, May 2, 2014 9:25 AM (in response to Tom Petrus)
This was just pseudocode showing the nesting. There is a lot more code; I was just demonstrating. If that outer server process call was not a server process but, say, a jQuery loop over some objects in the DOM, I still ran into the problem.
This was on an onChange Dynamic Action.
14. Re: Tabular form "This form contains unsaved changes" message - Tom Petrus, May 2, 2014 9:59 AM (in response to TexasApexDeveloper)
Tony,
I see, you're ganging up on me. Sure, you can use apex_item calls in a wizard form. I assumed that this was either manual or wizard, not a mix (I hate the mix).
The code above which refills gTabFormData should work - when you look in the javascript file widget.tabular.js, it is _setModified which is called. It loops over the items in the form and compares them with the values in gTabFormData. If there is a change -> show the message. Shouldn't matter whether the item is generated through apex_item or by apex.
That being said, you can do a manual compare by inspecting the values in apex.tabular.gTabFormData and the values retrieved by running
$x_FormItems( $x( apex.tabular.gTabFormReportID ) );
It stands to reason that by setting the values in gTabFormData equal to those found in the form, no changes should be detected. If it does, something is going "wrong". Can't tell out of hand what.
Harry,
Good - But I wouldn't have been surprised otherwise too Well - if there are nested calls the execution order is still important of course. It can matter a great deal where you are executing the code of course. Yet again, I can't tell out of hand since your other code may have an effect on it.
ps: I found where x01 is residing: it's the pagination select list (didn't use the select list which is why I didn't find it right away). | https://community.oracle.com/message/12363916 | CC-MAIN-2015-48 | refinedweb | 2,096 | 64.71 |
I've searched high and low to no avail! I'm new to Unity and as such am experimenting with using the Android accelerometer to roll a sphere around my phone display. I found some code that seems to work for others, but for me I only get tilt in the z axis (w.r.t. the display; the x axis w.r.t. the main camera), while I require tilt motion control for left, right, back and forth. Probably something really simple I've overlooked, so I'm feeling foolish in anticipation...

Any help will be greatly appreciated!

Here is essentially the code I'm using:
```csharp
Vector3 dir = Vector3.zero;
dir.x = -Input.acceleration.y;
dir.z = Input.acceleration.x;
if (dir.sqrMagnitude > 1)
    dir.Normalize();
dir *= Time.deltaTime;
transform.Translate(dir * speed);
```
Using an HTC? Your accelerometer may be broken, or needs calibrating.
That's 63 people who have potentially viewed your question looking to answer it :)
Can anyone see where I'm going wrong? Don't get me wrong.. I'm not expecting to have my hand held atm, but I am green lol
Answer by Graham-Dunnett · Jul 04, 2014 at 03:10 PM
meat5000... I'm on a Sony C1505. I have considered calibrating the accelerometer since I get some weird things happening if, say, I hold the phone upright when loading the app.. methinks I need to zero it first, so thanks and I'll keep it in mind :-)
Let's see if I can post today.. big thanks to boredmormon and graham for your help. I'll see if that code solves my issue
Try
Debug.Log(Input.acceleration);
This should show you if you have a hardware issue.
Also, consider this
Accelerometer is supposed to measure changes in position, literally the acceleration of the device. Gyroscope, in fact, is what shows you the orientation.
```csharp
using UnityEngine;
using System.Collections;

public class tilty : MonoBehaviour {

    // Use this for initialization
    void Start ()
    {
        var tilt : Vector3 = Vector3.zero;
        var speed : float;
    }

    private var circ : float;
    private var previousPosition : Vector3;

    @script RequireComponent(Rigidbody);

    function Start()
    {
        // Find the circumference of the circle so that the circle can be
        // rotated the appropriate amount when rolling
        circ = 2 * Mathf.PI * collider.bounds.extents.x;
        previousPosition = transform.position;
    }

    function Update (){
        tilt.x = Input.acceleration.y;
        tilt.z = -Input.acceleration.x;
        rigidbody.AddForce(tilt * speed * Time.deltaTime);
    }

    function LateUpdate(){
        var movement : Vector3 = transform.position - previousPosition;
        movement = Vector3(movement.z, 0, -movement.x);
        transform.Rotate(movement / circ * 360, Space.World);
        previousPosition = transform.position;
    }
}

// Update is called once per frame
//void Update () {
//}
```
Added this code... got 5 error messages:
c:\Users\DAVE\Documents\New Unity Project 3\Assets\tilty.cs: Error CS1001: Identifier expected (CS1001) (Assembly-CSharp)
c:\Users\DAVE\Documents\New Unity Project 3\Assets\tilty.cs: Error CS1519: Invalid token ';' in class, struct, or interface member declaration (CS1519) (Assembly-CSharp)
c:\Users\DAVE\Documents\New Unity Project 3\Assets\tilty.cs: Error CS1002: ; expected (CS1002) (Assembly-CSharp)
Have tried tweaking.. no joy. I don't expect anyone to hold my hand through this, but I really am scuppered atm!!
Answer by dare00 · Jul 12, 2014 at 03:06 PM
Ok, I found this: and with some tweaking it does what I want: get a force from the accelerometers instead of an angle.
This section demonstrates how to create a new file.
Description of code:
Manipulating a file is a common task in programming. Java makes this easy by providing many useful tools. Through the use of these tools, you can easily create a new empty file. Now for this task, we have used the boolean-returning method createNewFile(), which creates a new empty file if and only if a file with this name does not yet exist.
createNewFile(): This method of File class creates an empty file at the specified file location if a file with this name does not exist.
exists(): This method of File class checks whether the given file exists or not.
Here is the code:
import java.io.*;

public class FileCreate {
    public static void main(String[] args) throws Exception {
        File file = new File("C:/newfile.txt");
        if (file.exists()) {
            System.out.println("File already exists");
        } else {
            file.createNewFile();
            System.out.println("File is created");
        }
    }
}
Through the method createNewFile(), you can create a file.
Output:
Posted on: April | http://www.roseindia.net/tutorial/java/core/files/createfile.html | CC-MAIN-2016-40 | refinedweb | 173 | 67.45 |
say i wanted to do something at random about 20 percent of the time.
the best way i figure is to pick a random number between 0 and 100. pick a range between
0 and 100. the difference in the range is the percent. 20 and 40 is 20 percent;
if the random number is within this range, then "this is the 20% percent of the time".
because what i just typed doesn't make much sense, say i was creating a texas hold'em poker game for example. typically, most players raise before the flop with Ace-King. But to throw people off, some players don't do this and just call. I want to 'just call' about 20 percent of the time and at random. I want to figure out when "20 percent of the time" is.
(btw, i'm not creating a poker game. i've just been thinking about it. that's a little out of my league right now)
1. do you agree with the logic?
2. is there a more elegant way to do this?
3. should this be in the AI section?
Code:
void main() {
    if (between(GetRand(0, 100), 1, 20))
        dontRaise();
    return void; // :-P
}

bool between(int n, int x, int y) {
    int temp;

    // min-max
    if (x > y) {
        temp = x;
        x = y;
        y = temp;
    }

    if (n >= x && n <= y)
        return true;
    return false;
}

// taken from FAQ
int GetRand(int min, int max) {
    static int Init = 0;
    int rc;

    if (Init == 0) {
        srand(time(NULL));
        Init = 1;
    }

    rc = (rand() % (max - min + 1) + min);
    return (rc);
}
String interpolation is a process of substituting values of variables into placeholders in a string. For instance, if you have a template for saying hello to a person like "Hello {Name of person}, nice to meet you!", you would like to replace the placeholder for the name of the person with an actual name. This process is called string interpolation.
f-strings
Python 3.6 added a new string interpolation method called literal string interpolation and introduced a new literal prefix f. This new way of formatting strings is powerful and easy to use. It provides access to embedded Python expressions inside string constants.
Example 1:
name = 'World'
program = 'Python'
print(f'Hello {name}! This is {program}')
When we run the above program, the output will be
Hello World! This is Python
In the above example, the literal prefix f tells Python to substitute the values of the two string variables name and program inside the braces {}, so that when we print the string, the actual values appear in place of the placeholders.
This new string interpolation is powerful, as we can embed arbitrary Python expressions; we can even do inline arithmetic with it.
Example 2:
a = 12
b = 3
print(f'12 multiply 3 is {a * b}.')
When we run the above program, the output will be
12 multiply 3 is 36.
In the above program we did inline arithmetic which is only possible with this method.
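As a further illustration (the variable names here are ours, not from the original examples), an expression inside the braces can also be combined with a format specifier:

```python
price = 49.99
quantity = 3

# The expression and the ':.2f' format specifier share the same braces.
total = f'Total: {price * quantity:.2f}'
print(total)
```

This prints Total: 149.97, rounding the computed value to two decimal places.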
%-formatting
Strings in Python have a unique built-in operation that can be accessed with the % operator. Using %, we can do simple string interpolation very easily.
print("%s %s" % ('Hello', 'World'))
When we run the above program, the output will be
Hello World
In the above example we used two %s format specifiers and two strings, Hello and World, inside parentheses (). We got Hello World as output. The %s format specifier tells Python where to substitute the value.
The string formatting syntax changes slightly if we want to make multiple substitutions in a single string: as the % operator only takes one argument, we need to wrap the right-hand side in a tuple, as shown in the example below.
Example 4:
name = 'world'
program = 'python'
print('Hello %s! This is %s.' % (name, program))
When we run the above program, the output will be
Hello world! This is python.
In the above example we used two string variables, name and program, and wrapped both in parentheses () to form a tuple.
It’s also possible to refer to variable substitutions by name in our format string, if we pass a mapping to the % operator:
Example 5:
name = 'world'
program = 'python'
print('Hello %(name)s! This is %(program)s.' % {'name': name, 'program': program})

When we run the above program, the output will be

Hello world! This is python.
This makes our format strings easier to maintain and easier to modify in the future. We don’t have to worry about the order of the values that we’re passing with the order of the values that are referenced in the format string.
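To illustrate the point (this small example is ours), the same key can even be referenced more than once, in any order, when a mapping is passed:

```python
data = {'name': 'world'}

# The 'name' key is looked up twice; its position in the string is free.
greeting = 'Hello %(name)s! Goodbye %(name)s!' % data
print(greeting)
```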
Str.format()
In this string formatting method, we call the format() function on a string object containing braces {}; the values passed to format() are substituted in place of the braces {}. We can use the format() function to do simple positional formatting, just like % formatting.
Example 6:
name = 'world'
print('Hello, {}'.format(name))

When we run the above program, the output will be

Hello, world
In this example we used braces {} and the format() function to pass the name object. We got the value of name in place of the braces {} in the output.
We can refer to our variable substitutions by name and use them in any order we want. This is quite a powerful feature as it allows for re-arranging the order of display without changing the arguments passed to the format function.
Example 7:
name = 'world'
program = 'python'
print('Hello {name}! This is {program}.'.format(name=name, program=program))

When we run the above program, the output will be

Hello world! This is python.
In this example we specified the places for the variable substitutions using the variable names, and passed the variables to format() as keyword arguments.
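Re-ordering also works with purely positional substitutions if we put explicit indices inside the braces (a small example of ours):

```python
# Index 0 refers to 'world' and index 1 to 'python';
# they can appear in the string in any order.
s = 'This is {1}. Hello {0}!'.format('world', 'python')
print(s)
```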
Template Strings
Template strings are a simpler and less powerful mechanism for string interpolation. We need to import the Template class from Python’s built-in string module to use it.
Example 8:
from string import Template

name = 'world'
program = 'python'
new = Template('Hello $name! This is $program.')
print(new.substitute(name=name, program=program))
When we run the above program, the output will be
Hello world! This is python.
In this example we imported the Template class from the built-in string module, made a template, and passed the two variables to its substitute() method.
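The Template class also provides a safe_substitute() method which, unlike substitute(), leaves any missing placeholder in place instead of raising a KeyError (example ours):

```python
from string import Template

t = Template('Hello $name! This is $program.')

# Only 'name' is supplied; '$program' survives untouched.
partial = t.safe_substitute(name='world')
print(partial)
```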
Key Points to Remember:
- The %-formatting method is the oldest interpolation method and is not recommended, as it decreases code readability.
- In the str.format() method, we pass values to the format() function called on a string object for string interpolation.
- In the template method, we make a template by importing the Template class from the built-in string module.
- The literal string interpolation method (f-strings) is a powerful interpolation method that is easy to use and increases code readability.
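As a closing illustration (not from the original article), all four methods can produce exactly the same string:

```python
from string import Template

name = 'world'
program = 'python'

percent = 'Hello %s! This is %s.' % (name, program)
fmt = 'Hello {}! This is {}.'.format(name, program)
fstr = f'Hello {name}! This is {program}.'
tpl = Template('Hello $name! This is $program.').substitute(name=name, program=program)

# All four interpolation styles yield an identical result.
print(percent == fmt == fstr == tpl)
```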
Colour/tint built-in Images
Hi All,
I'm 99.99% sure I've read something about changing the colour/tint of Pythonista's in-built images somewhere in this forum but I've spent a long time searching and can't find it. Hopefully this post will appear more easily in searches in the future with some appropriate tags for the benefit of others.
I thought changing the colour of the in-built images would be as simple as accessing the "tint" or "tint_color" attributes of the Image object, but it isn't. It's actually a real pain in the proverbial. For those interested, you can do so by using the ImageContext as follows:
import ui

with ui.ImageContext(100, 100) as ctx:
    ui.set_color('red')
    img = ui.Image('iob:arrow_down_a_256').with_rendering_mode(ui.RENDERING_MODE_TEMPLATE).draw(0, 0, 100, 100)
    img = ctx.get_image()

img.show()
@m_doyle04, changing the tint works when the image is on a button, but not stand-alone:
from ui import *

v = Button(image=Image('iob:arrow_down_a_256'))
v.background_color = 'white'
v.tint_color = 'red'
v.present('sheet')
previous discussion here
@enceladus thanks, I knew I'd seen that somewhere before! | https://forum.omz-software.com/topic/5165/colour-tint-built-in-images | CC-MAIN-2021-49 | refinedweb | 193 | 51.04 |
Provided by: libgetdata-doc_0.10.0-5build2_all
NAME
gd_strings — retrieve a list of string values from a Dirfile
SYNOPSIS
#include <getdata.h>

const char **gd_strings(DIRFILE *dirfile);

const char **gd_mstrings(DIRFILE *dirfile, const char *parent);
DESCRIPTION
The gd_strings() function queries a dirfile(5) database specified by dirfile and compiles a read-only list of values of all the STRING type fields defined in the database, excluding /META subfields. The gd_mstrings() function produces the same list, but for STRING meta subfields of the indicated parent field.

The dirfile argument must point to a valid DIRFILE object previously created by a call to gd_open(3).
RETURN VALUE
Upon successful completion, gd_strings() returns a pointer to an array of strings containing the values of all the STRING fields defined in the dirfile database. Similarly, gd_mstrings() returns a pointer to an array of strings containing the values of all the STRING metafields under parent. The returned array is terminated by a NULL pointer. A valid pointer is always returned if this function does not encounter an error. If there are no string values to return, a pointer to an array consisting of only the NULL pointer is returned.

The array returned will be de-allocated by a call to gd_close(3) and should not be de-allocated by the caller. The list returned should not be assumed to be in any particular order, although it is guaranteed to be in the same order as the list of STRING fields returned by gd_field_list_by_type(3). The number of strings in the array can be obtained from a call to gd_nfields_by_type(3). The caller may not modify any strings in the array, or the array itself. Doing so may cause database corruption. The pointer returned is guaranteed to be valid until gd_strings() or gd_mstrings() is called again with the same arguments, or until the array is de-allocated by a call to gd_close(3) or gd_discard(3).

On error, these functions return NULL and store a negative-valued error code in the DIRFILE object which may be retrieved by a subsequent call to gd_error(3). Possible error codes are:

GD_E_ALLOC
        The library was unable to allocate memory.

GD_E_BAD_CODE
        (gd_mstrings() only) The supplied parent field code was not found, or referred to a metafield itself.

GD_E_BAD_DIRFILE
        The supplied dirfile was invalid.

A descriptive error string for the error may be obtained by calling gd_error_string(3).
HISTORY
The get_strings() function appeared in GetData-0.3.0. The get_mstrings() function appeared in GetData-0.4.0. In GetData-0.7.0, these functions were renamed to gd_strings() and gd_mstrings().
SEE ALSO
gd_error(3), gd_error_string(3), gd_field_list_by_type(3), gd_mstrings(3), gd_nfields_by_type(3), gd_open(3), gd_string(3), dirfile(5) | http://manpages.ubuntu.com/manpages/disco/man3/gd_strings.3.html | CC-MAIN-2020-34 | refinedweb | 455 | 63.59 |
For the past three months, we have been trying to master ns-3. We had our tryst with socket programming, grasped the basics of Waf, Mercurial, Doxygen, etc, and finally we ran our first ns-3 script. But our aim is not to just run ns-3 scripts; we are in hot pursuit of data which, when processed, will yield information that might even change the history of computer networks. Thus, the ultimate aim of any simulation is to obtain data and information. There is a large amount of data available from an ns-3 script, which might even lead to information overload. As mentioned earlier, there are three different ways to harness this data from an ns-3 simulation. The log data, the trace file data and the topology animation. Of the three methods, the log data provides the least amount of information and we have covered it already. What matters now are the other two methods the trace files and topology animation. The topology animation is important because it confirms the topology visually, and trace analysis is important because thats where all our data lies.
I just want to mention the relative importance of animation and trace analysis. In one of my previous incarnations (when I was helping students with their ns-2 projects), I found them mostly worried about the animation and not about the trace file. I have also come across academicians who pester their students with the animation part of the simulation and not the trace file which actually is a treasure trove of information. But from experience, let me tell you something the animation part of the simulation in ns-2 or ns-3 is not your priority. It might be entertaining to view the animation of your topology but the real purpose of it is to confirm the topology and nothing more. So our priority is the trace file and its analysis, but due to popular demand, I will first discuss the setting up of NetAnim and running topology animations.
Installing NetAnim
Last time when we tried to run NetAnim, we discovered that NetAnim was not yet installed in our system. The QT4 development package is required to install NetAnim. If you do not have Internet connectivity in your system then please install the QT4 development package with installation files downloaded from elsewhere. If you are following the installation instructions given here, it is mandatory to have Internet connectivity in your system. If your operating system is Debian or Ubuntu, then execute the following commands in a terminal:
apt-get install mercurial
apt-get install qt4-dev-tools
If your operating system is Fedora, then type the following commands in the terminal:
yum install mercurial
yum install qt4
yum install qt4-devel
The above commands will take care of the QT4 development package installation. Now it is time to install NetAnim. If you have followed the installation instructions provided in the previous parts of this series, then you will have a directory named ns/ns-allinone-3.22/netanim-3.105. Open a terminal in this directory and execute the following command:
make clean
qmake NetAnim.pro
make
The command qmake NetAnim.pro might give you the following error message qmake: command not found. The utility called qmake automates the generation of Makefiles. qmake is not supported in some systems, in which case, execute the command qmake-qt4 NetAnim.pro instead of qmake NetAnim.pro and then execute the command make. This will finish the installation of NetAnim in your system, and you can see the ELF file (executable file format in Linux similar to an exe file in Windows) of NetAnim in the directory ns/ns-allinone-3.22/netanim-3.105. Now, you can run an instance of NetAnim by executing the command ./NetAnim in a terminal opened in this directory. But we still dont have an XML based animation trace file which will be used by NetAnim to show us the topology animation. So it is time to go through another ns-3 script, which will give us the XML file to feed NetAnim.
The script for animation and tracing
The ns-3 script named netanim1.cc given below will generate an animation trace file with the extension .xml and an ASCII trace file with extension .tr. You can download the file netanim1.cc and all the other script files discussed in this article from opensourceforu.com/article_source_code/august2015/ns3.zip. Running the ns-3 script is similar to what we did before with just the names changed.
#include <iostream>
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/internet-module.h"
#include "ns3/point-to-point-module.h"
#include "ns3/netanim-module.h"
#include "ns3/applications-module.h"
#include "ns3/point-to-point-layout-module.h"

using namespace ns3;
using namespace std;

int main(int argc, char *argv[])
{
  Config::SetDefault("ns3::OnOffApplication::PacketSize", UintegerValue(1024));
  Config::SetDefault("ns3::OnOffApplication::DataRate", StringValue("100kb/s"));

  std::string animFile = "netanim1.xml";

  PointToPointHelper pointToPointRouter;
  pointToPointRouter.SetDeviceAttribute("DataRate", StringValue("10Mbps"));
  pointToPointRouter.SetChannelAttribute("Delay", StringValue("1ms"));

  PointToPointHelper pointToPointEndNode;
  pointToPointEndNode.SetDeviceAttribute("DataRate", StringValue("10Mbps"));
  pointToPointEndNode.SetChannelAttribute("Delay", StringValue("1ms"));

  PointToPointDumbbellHelper d(1, pointToPointEndNode, 1, pointToPointEndNode, pointToPointRouter);

  InternetStackHelper stack;
  d.InstallStack(stack);

  d.AssignIpv4Addresses(Ipv4AddressHelper("192.168.100.0", "255.255.255.0"),
                        Ipv4AddressHelper("192.169.100.0", "255.255.255.0"),
                        Ipv4AddressHelper("192.170.100.0", "255.255.255.0"));

  OnOffHelper clientHelper("ns3::UdpSocketFactory", Address());
  clientHelper.SetAttribute("OnTime", StringValue("ns3::UniformRandomVariable"));
  clientHelper.SetAttribute("OffTime", StringValue("ns3::UniformRandomVariable"));

  ApplicationContainer clientApps;
  AddressValue remoteAddress(InetSocketAddress(d.GetLeftIpv4Address(0), 5000));
  clientHelper.SetAttribute("Remote", remoteAddress);
  clientApps.Add(clientHelper.Install(d.GetRight(0)));
  clientApps.Start(Seconds(1.0));
  clientApps.Stop(Seconds(10.0));

  d.BoundingBox(1, 1, 25, 25);
  AnimationInterface anim(animFile);
  anim.EnablePacketMetadata();

  Ipv4GlobalRoutingHelper::PopulateRoutingTables();

  AsciiTraceHelper ascii;
  pointToPointEndNode.EnableAsciiAll(ascii.CreateFileStream("netanim1.tr"));

  Simulator::Run();
  cout << "\n\nAnimation XML file " << animFile.c_str() << " created\n\n";
  Simulator::Destroy();
  return 0;
}
Even though we are concerned with animation and trace analysis in this article, I will briefly explain the program, in general. The script uses yet another helper class called PointToPointDumbbellHelper to generate the topology. This class will help us create dumbbell shaped topologies. But the line PointToPointDumbbellHelper d(1, pointToPointEndNode, 1, pointToPointEndNode, pointToPointRouter); creates a topology with just one end node (leaf node) on either side of the handle of the dumbbell formed by two nodes. All the other lines of code in the script are somewhat similar to the ones we have seen in the previous ns-3 script, except for those lines that are responsible for generating the XML based animation trace file for NetAnim. In the next section, we will discuss these lines.
NetAnim execution
In the simulation script netanim1.cc the line std::string animFile = netanim1.xml; creates an animation trace file with the name netanim1.xml. After generating the topology, the line d.BoundingBox(1, 1, 25, 25); defines the Cartesian coordinates of the animation window. The line AnimationInterface anim(animFile); sends the data to the file netanim.xml required by NetAnim for displaying the topology animation. Finally, the line anim.EnablePacketMetadata(); adds packet metadata related information to the animation trace file. This line of code is optional. When you execute the ns-3 script, the file netanim1.xml is generated in the directory ns/ns-allinone-3.22/ns-3.22. Now open this file in the NetAnim window by clicking the file menu, and after opening the file, start the topology animation by pressing the Play button. Figure 1 shows the network traffic in the NetAnim window.
If you observe the NetAnim window carefully, you will see different icons denoting operations like start, stop, reload, display packet metadata, zoom in, zoom out, etc. Play around with those buttons for some time till you feel confident about working with NetAnim. The top right corner displays the current simulation time. The speed of the animation can be increased or decreased by moving a button to the left or right. This button is placed at the top panel between the words fast and slow. If you observe Figure 1, you will see that I have reduced the speed of the animation to the minimum possible value to take the screenshot. Now that we have some knowledge of NetAnim, it is time to deal with the ASCII trace files.
ASCII trace files demystified
The three important aspects of ns-3 are simulating the correct topology, understanding the structure of an ASCII trace file, and making changes to the existing ns-3 source files to suit your proposed protocol or architecture. In ns-3, almost all the information is available in the form of an ASCII trace file. Getting results from an ns-3 simulation involves deciphering the trace file. The simulation script netanim1.cc also produces an ASCII trace file named netanim1.tr. The lines of code AsciiTraceHelper ascii; and pointToPointEndNode.EnableAsciiAll(ascii.CreateFileStream(netanim1.tr)); are responsible for generating the trace file. The trace file netanim1.tr is a large file with a number of lines of data. Each line in the trace file corresponds to a trace event. Trace events denote what happens to a packet in the transmit queue of a node. The important events happening in the transmit queue include packet enqueuing denoted by +, packet dequeuing denoted by -, packet reception denoted by r and packet drop denoted by d. Those who are familiar with ns-2 might remember that these symbols carry the same meaning in ns-2 also. Now consider the code shown below. It is actually a single line of data from the trace file netanim1.tr divided into a number of lines for better understanding.
+ 2.03474 /NodeList/1/DeviceList/0/$ns3::PointToPointNetDevice/TxQueue/Enqueue ns3::PppHeader ( Point-to-Point Protocol: IP (0x0021)) ns3::Ipv4Header ( tos 0x0 DSCP Default ECN Not-ECT ttl 63 id 0 protocol 17 offset (bytes) 0 flags [none] length: 1052 192.169.100.1 > 192.168.100.1) ns3::UdpHeader ( length: 1032 49153 > 5000) Payload (size=1024)
As mentioned earlier, the trace event shown above corresponds to a packet enqueue operation because the line starts with the symbol +. The second line tells us the time at which the event happened. The third and fourth lines tells us about the node and device in which the event happened. We now know that the event happened in Device 0 of Node 1. Lines 5 and 6 tell us that a point-to-point protocol is used in this example. Lines 7 to 10 tell us about the IPv4 header details used in this example. We also get the IP addresses of the node from which the packet originates and the node to which it is destined. In this case, the packet originates from a node with IP address 192.169.100.1 and is destined to a node with IP address 192.168.100.1. Lines 11 and 12 tell us that about the UDP header. The thirteenth line tells us about the size of the payload. In this example, the size of the payload is 1024 bytes.
So now we have some idea about the data provided by the trace file netanim1.tr. But alas, the structure of a trace file is not always uniform. When a simulation involves different protocols and different applications, the resulting trace file also varies. So it is not possible to have a single formula to understand the different fields of an ns-3 trace file. Understanding the meaning of the different fields of an ns-3 trace file is one of the skills you have to acquire in the long journey to master ns-3. But whatever the structure of the trace file is, it will contain a number of fields representing some physical parameter. All you have to do is find out which field represents which physical parameter. But even then, you need some tools to analyse the trace file. The next section briefly discusses a few techniques to extract information from a trace file.
Tools to process trace files
Sometimes, there are millions of lines of data in the trace file. Then, manual processing becomes impossible. In such situations, we have different techniques to extract information from the trace file. Let us consider the simple problem of finding the number of UDP packets received at node 2, the receiver node. We will solve this problem using different approaches. One possible method is to use Linux commands. Highly useful Linux commands for text processing include grep, cut, paste, etc. The grep command is used to select lines from a text containing a particular pattern. The cut command is used to extract specific columns of data from a multi-column data file. The command paste is used for creating multi-column output files, especially useful for producing graphs. Now consider the one-liner given below.
grep ^r netanim1.tr | grep NodeList/2 | wc -l
This will give you the number of packets received at node 2. Here, we use pipelined Linux commands. First, we select all the lines starting with the letter r the event corresponding to packet reception from the trace file. Then from this, we select the lines containing the pattern /Node/2. And finally, we count the number of selected lines with the command wc l. But it may not always be possible to extract information using Linux commands alone. In many situations we need more powerful tools. One such tool is a programming language called AWK, optimised for text processing. Consider the AWK script packet_count.awk given below.
#!/bin/awk -f
BEGIN {
    count = 0;
}
{
    if ($1 == "r" && $3 ~ /NodeList\/2\//) {
        count++;
    }
}
END {
    print "\n\tNo of packets received at the destination = ", count, "\n\n";
}
This AWK script, when executed, will give you the number of packets received at node 2. In an AWK script, the BEGIN section will be processed only oncein the beginning. The END section will also be executed just once at the end. The code given between BEGIN and END sections, enclosed inside the curly bracket, will be executed on every line of the trace file being processed. Each line is divided into a number of fields with whitespace as the character deciding the ending of a field and the beginning of a new field. In the script, $1 contains the first column and this column denotes the event type. $3 contains the third column of the trace file and this corresponds to the third line of the trace file sample shown earlier. So the AWK script selects only those lines that denote a packet reception event at node 2. A counter is incremented if such an event happens and, finally, this value is displayed in the END section.
Even though the language AWK is highly suitable for text processing, its popularity has been waning over the years. As far as text processing is concerned, a general-purpose language called Perl has now taken centre stage. It will be highly beneficial in the long run if you select Perl rather than AWK for text processing. Given below is a Perl script packet_count.pl for finding the number of packets received at node 2.
#!/bin/perl
$infile = $ARGV[0];
open(DATA, "<$infile") || die "File $infile Not Found";
$counter = 0;
while (<DATA>) {
    @x = split(' ');
    if ($x[0] eq "r") {
        if ($x[2] =~ /NodeList\/2/) {
            $counter = $counter + 1;
        }
    }
}
printf "\n\tNo of packets received at the destination = %d \n\n", $counter;
close DATA;
exit(0);
The working of the Perl script is similar to that of the AWK script. In the script given above, @x is an array variable containing a line of data from the trace file. The variable $x[0] contains the first field of a line of data, which is the event type. Similarly, $x[2] contains the third column of the trace file, and this corresponds to the third line of the trace file sample. Thus the Perl script also counts and reports the number of packets received at node 2. The execution of the three different scripts discussed in this section is shown in Figure 2.
But the scripts will get executed only if your system contains executable files of Perl and AWK. Open a terminal and execute the command echo $PATH. A number of directory names with a complete paths will be displayed. Make sure that the executable files of AWK and Perl are available in any one of these directories. The executable files are named awk and perl. If your system doesnt have these files, install AWK and Perl.
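Although the article recommends Perl, the same counting logic can also be sketched in Python for comparison. The trace lines below are shortened, hypothetical samples in the format described earlier:

```python
def count_received(trace_lines, node=2):
    """Count packet reception ('r') events at the given node number."""
    pattern = '/NodeList/%d/' % node
    count = 0
    for line in trace_lines:
        fields = line.split()
        # fields[0] is the event type, fields[2] the node/device path
        if len(fields) > 2 and fields[0] == 'r' and pattern in fields[2]:
            count += 1
    return count

sample = [
    '+ 2.03474 /NodeList/1/DeviceList/0/$ns3::PointToPointNetDevice/TxQueue/Enqueue',
    'r 2.04101 /NodeList/2/DeviceList/0/$ns3::PointToPointNetDevice/MacRx',
    'r 2.05210 /NodeList/2/DeviceList/0/$ns3::PointToPointNetDevice/MacRx',
    'r 2.06000 /NodeList/0/DeviceList/0/$ns3::PointToPointNetDevice/MacRx',
]
print(count_received(sample))
```

For a real run you would read the lines from netanim1.tr instead of the inline sample list.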
searching module¶
This module contains classes and functions related to searching the index.
Searching classes¶
- class whoosh.searching.Searcher(reader, weighting=<class 'whoosh.scoring.BM25F'>, closereader=True, fromindex=None, parent=None)¶
Wraps an IndexReader object and provides methods for searching the index.
collector(limit=10, sortedby=None, reverse=False, groupedby=None, collapse=None, collapse_limit=1, collapse_order=None, optimize=True, filter=None, mask=None, terms=False, maptype=None, scored=True)¶
Low-level method: returns a configured whoosh.collectors.Collector object based on the given arguments. You can use this object with Searcher.search_with_collector() to search.
See the documentation for the Searcher.search() method for a description of the parameters.
This method may be useful to get a basic collector object and then wrap it with another collector from whoosh.collectors or with a custom collector of your own:
# Equivalent of
# results = mysearcher.search(myquery, limit=10)
# but with a time limit...

# Create a TopCollector
c = mysearcher.collector(limit=10)

# Wrap it with a TimeLimitCollector with a time limit of
# 10.5 seconds
from whoosh.collectors import TimeLimitCollector
c = TimeLimitCollector(c, 10.5)

# Search using the custom collector
results = mysearcher.search_with_collector(myquery, c)
correct_query(q, qstring, correctors=None, terms=None, maxdist=2, prefix=0, aliases=None)¶
Returns a corrected version of the given user query using a default whoosh.spelling.ReaderCorrector.
The default:
- Corrects any words that don’t appear in the index.
- Takes suggestions from the words in the index.

To make certain fields use custom correctors, use the correctors argument to pass a dictionary mapping field names to whoosh.spelling.Corrector objects.
Expert users who want more sophisticated correction behavior can create a custom whoosh.spelling.QueryCorrector and use that instead of this method.
Returns a whoosh.spelling.Correction object with a query attribute containing the corrected whoosh.query.Query object and a string attribute containing the corrected query string.
>>> from whoosh import qparser, highlight
>>> qtext = 'mary "litle lamb"'
>>> q = qparser.QueryParser("text", myindex.schema)
>>> mysearcher = myindex.searcher()
>>> correction = mysearcher.correct_query(q, qtext)
>>> correction.query
<query.And ...>
>>> correction.string
'mary "little lamb"'
>>> mysearcher.close()
You can use the Correction object’s format_string method to format the corrected query string using a whoosh.highlight.Formatter object. For example, you can format the corrected string as HTML, emphasizing the changed words.
>>> hf = highlight.HtmlFormatter(classname="change")
>>> correction.format_string(hf)
'mary "<strong class="change">little</strong> lamb"'
docs_for_query(q, for_deletion=False)¶
Returns an iterator of document numbers for documents matching the given whoosh.query.Query object.
document(**kw)¶
Convenience method returns the stored fields of a document matching the given keyword arguments, where the keyword keys are field names and the values are terms that must appear in the field.
This method is equivalent to:
searcher.stored_fields(searcher.document_number(<keyword args>))
Where Searcher.documents() returns a generator, this function returns either a dictionary or None. Use it when you assume the given keyword arguments either match zero or one documents (i.e. at least one of the fields is a unique key).
>>> stored_fields = searcher.document(path=u"/a/b") >>> if stored_fields: ... print(stored_fields['title']) ... else: ... print("There is no document with the path /a/b")
document_number(**kw)¶
Returns the document number of the document matching the given keyword arguments, where the keyword keys are field names and the values are terms that must appear in the field.
>>> docnum = searcher.document_number(path=u"/a/b")
Where Searcher.document_numbers() returns a generator, this function returns either an int or None. Use it when you assume the given keyword arguments either match zero or one documents (i.e. at least one of the fields is a unique key).
document_numbers(**kw)¶
Returns a generator of the document numbers for documents matching the given keyword arguments, where the keyword keys are field names and the values are terms that must appear in the field. If you do not specify any arguments (
Searcher.document_numbers()), this method will yield all document numbers.
>>> docnums = list(searcher.document_numbers(emailto="matt@whoosh.ca"))
documents(**kw)¶
Convenience method returns the stored fields of a document matching the given keyword arguments, where the keyword keys are field names and the values are terms that must appear in the field.
Returns a generator of dictionaries containing the stored fields of any documents matching the keyword arguments. If you do not specify any arguments (
Searcher.documents()), this method will yield all documents.
>>> for stored_fields in searcher.documents(emailto=u"matt@whoosh.ca"): ... print("Email subject:", stored_fields['subject'])
idf(fieldname, text)¶
Calculates the Inverse Document Frequency of the current term (calls idf() on the searcher’s Weighting object).
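Whoosh delegates this calculation to the searcher's weighting model, so the exact formula depends on the scoring class in use. As a rough illustration of the idea, here is one common smoothed IDF formulation in plain Python (an assumption for demonstration, not necessarily the formula Whoosh's default model applies):

```python
import math

def smoothed_idf(doc_count, doc_frequency):
    """One common smoothed inverse-document-frequency formulation:
    terms appearing in few documents score higher than common terms."""
    return math.log(doc_count / (doc_frequency + 1)) + 1

# In a 1000-document index, a term found in 2 documents is weighted
# far more heavily than one found in 500.
rare = smoothed_idf(1000, 2)
common = smoothed_idf(1000, 500)
assert rare > common
```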
key_terms(docnums, fieldname, numterms=5, model=<class 'whoosh.classify.Bo1Model'>, normalize=True)¶
Returns the ‘numterms’ most important terms from the documents listed (by number) in ‘docnums’. You can get document numbers for the documents you're interested in with the document_number() and document_numbers() methods.
“Most important” is generally defined as terms that occur frequently in the top hits but relatively infrequently in the collection as a whole.
>>> docnum = searcher.document_number(path=u"/a/b") >>> keywords_and_scores = searcher.key_terms([docnum], "content")
This method returns a list of (“term”, score) tuples. The score may be useful if you want to know the “strength” of the key terms, however to just get the terms themselves you can just do this:
>>> kws = [kw for kw, score in searcher.key_terms([docnum], "content")]
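The "frequent in the top hits but rare in the collection" heuristic behind models like Bo1 can be sketched with a toy scorer. This is purely illustrative and is not Whoosh's Bo1 implementation; the helper name toy_key_terms is made up:

```python
from collections import Counter

def toy_key_terms(top_docs, collection, numterms=2):
    """Score each term in the top documents by its frequency there
    divided by its frequency across the whole collection."""
    top_tf = Counter(t for doc in top_docs for t in doc)
    coll_tf = Counter(t for doc in collection for t in doc)
    scored = [(term, tf / coll_tf[term]) for term, tf in top_tf.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:numterms]

collection = [
    ["whoosh", "index", "fast"],
    ["whoosh", "query", "fast"],
    ["pasta", "recipes", "fast"],
    ["pasta", "sauce", "fast"],
]
top_docs = collection[:2]  # pretend the first two were the top hits

key = [term for term, score in toy_key_terms(top_docs, collection)]
# "fast" occurs in every document, so it is not distinctive enough
# to count as a key term for the top hits.
assert "fast" not in key
assert "whoosh" in key
```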
key_terms_from_text(fieldname, text, numterms=5, model=<class 'whoosh.classify.Bo1Model'>, normalize=True)¶
Return the ‘numterms’ most important terms from the given text.
more_like(docnum, fieldname, text=None, top=10, numterms=5, model=<class 'whoosh.classify.Bo1Model'>, normalize=False, filter=None)¶
Returns a
Resultsobject containing documents similar to the given document, based on “key terms” in the given field:
# Get the ID for the document you're interested in
docnum = searcher.document_number(path=u"/a/b/c")
r = searcher.more_like(docnum)

print("Documents like", searcher.stored_fields(docnum)["title"])
for hit in r:
    print(hit["title"])
postings(fieldname, text, weighting=None, qf=1)¶
Returns a
whoosh.matching.Matcherfor the postings of the given term. Unlike the
whoosh.reading.IndexReader.postings()method, this method automatically sets the scoring functions on the matcher from the searcher’s weighting object.
reader()¶
Returns the underlying
IndexReader.
refresh()¶
Returns a fresh searcher for the latest version of the index:
my_searcher = my_searcher.refresh()
If the index has not changed since this searcher was created, this searcher is simply returned.
This method may CLOSE underlying resources that are no longer needed by the refreshed searcher, so you CANNOT continue to use the original searcher after calling
refresh()on it.
search(q, **kwargs)¶
Runs a whoosh.query.Query object on this searcher and returns a Results object. See How to search for more information.
This method takes many keyword arguments (documented below).
See Sorting and faceting for information on using sortedby and/or groupedby. See Collapsing results for more information on using collapse, collapse_limit, and collapse_order.
search_page(query, pagenum, pagelen=10, **kwargs)¶
Like the Searcher.search() method, but returns a ResultsPage object. This is a convenience function for getting a certain "page" of the results for the given query, which is often useful in web search interfaces.
For example:
querystring = request.get("q")
query = queryparser.parse("content", querystring)
pagenum = int(request.get("page", 1))
pagelen = int(request.get("perpage", 10))

results = searcher.search_page(query, pagenum, pagelen=pagelen)
print("Page %d of %d" % (results.pagenum, results.pagecount))
print("Showing results %d-%d of %d"
      % (results.offset + 1, results.offset + results.pagelen + 1,
         len(results)))
for hit in results:
    print("%d: %s" % (hit.rank + 1, hit["title"]))
(Note that results.pagelen might be less than the pagelen argument if there aren’t enough results to fill a page.)
Any additional keyword arguments you supply are passed through to
Searcher.search(). For example, you can get paged results of a sorted search:
results = searcher.search_page(q, 2, sortedby="date", reverse=True)
Currently, searching for page 100 with pagelen of 10 takes the same amount of time as using Searcher.search() to find the first 1000 results. That is, this method does not have any special optimizations or efficiencies for getting a page from the middle of the full results list. (A future enhancement may allow using previous page results to improve the efficiency of finding the next page.)
This method will raise a ValueError if you ask for a page number higher than the number of pages in the resulting query.
search_with_collector(q, collector, context=None)¶
Low-level method: runs a whoosh.query.Query object on this searcher using the given whoosh.collectors.Collector object to collect the results:
myquery = query.Term("content", "cabbage")

uc = collectors.UnlimitedCollector()
tc = TermsCollector(uc)

mysearcher.search_with_collector(myquery, tc)
print(tc.docterms)
print(tc.results())
Note that this method does not return a Results object. You need to access the collector to get a results object or other information the collector might hold after the search.
suggest(fieldname, text, limit=5, maxdist=2, prefix=0)¶
Returns a sorted list of suggested corrections for the given mis-typed word text based on the contents of the given field:
>>> searcher.suggest("content", "specail")
["special"]
This is a convenience method. If you are planning to get suggestions for multiple words in the same field, it is more efficient to get a Corrector object and use it directly:
corrector = searcher.corrector("fieldname")
for word in words:
    print(corrector.suggest(word))
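The maxdist parameter caps the edit distance between the mis-typed word and any candidate suggestion. A plain-Python Levenshtein distance (an illustration; this is not how Whoosh computes suggestions internally) shows why "specail" is within the default maxdist=2 of "special":

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance
    (insertions, deletions, and substitutions each cost 1)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # delete ca
                            curr[j - 1] + 1,            # insert cb
                            prev[j - 1] + (ca != cb)))  # substitute
        prev = curr
    return prev[-1]

# The transposed letters in "specail" amount to two substitutions,
# so the word is within the default maxdist=2 of "special".
assert levenshtein("specail", "special") == 2
assert levenshtein("kitten", "sitting") == 3
```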
Results classes¶
- class whoosh.searching.Results(searcher, q, top_n, docset=None, facetmaps=None, runtime=0, highlighter=None)¶
This object is returned by a Searcher. This object represents the results of a search query. You can mostly use it as if it was a list of dictionaries, where each dictionary is the stored fields of the document at that position in the results.
Note that a Results object keeps a reference to the Searcher that created it, so keeping a reference to a Results object keeps the Searcher alive and so keeps all files used by it open.
estimated_length()¶
The estimated maximum number of matching documents, or the exact number of matching documents if it’s known.
estimated_min_length()¶
The estimated minimum number of matching documents, or the exact number of matching documents if it’s known.
extend(results)¶
Appends hits from ‘results’ (that are not already in this results object) to the end of these results.
fields(n)¶
Returns the stored fields for the document at the nth position in the results. Use Results.docnum() if you want the raw document number instead of the stored fields.
groups(name=None)¶
If you generated facet groupings for the results using the groupedby keyword argument to the search() method, you can use this method to retrieve the groups. You can use the facet_names() method to get the list of available facet names.
>>> results = searcher.search(my_query, groupedby=["tag", "price"])
>>> results.facet_names()
["tag", "price"]
>>> results.groups("tag")
{"new": [12, 1, 4], "apple": [3, 10, 5], "search": [11]}
If you only used one facet, you can call the method without a facet name to get the groups for the facet.
>>> results = searcher.search(my_query, groupedby="tag")
>>> results.groups()
{"new": [12, 1, 4], "apple": [3, 10, 5, 0], "search": [11]}
By default, this returns a dictionary mapping category names to a list of document numbers, in the same relative order as they appear in the results.
>>> results = mysearcher.search(myquery, groupedby="tag")
>>> docnums = results.groups()
>>> docnums['new']
[12, 1, 4]
You can then use Searcher.stored_fields() to get the stored fields associated with a document ID.
If you specified a different maptype for the facet when you searched, the values in the dictionary depend on the whoosh.sorting.FacetMap.
>>> myfacet = sorting.FieldFacet("tag", maptype=sorting.Count)
>>> results = mysearcher.search(myquery, groupedby=myfacet)
>>> counts = results.groups()
{"new": 3, "apple": 4, "search": 1}
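With the default maptype, this grouping is just an insertion-ordered bucketing of ranked document numbers by category, which is easy to reproduce in plain Python. The docnums below are toy values chosen to mirror the examples in this section:

```python
from collections import defaultdict

# (docnum, tag) pairs in ranked order, as if taken from the results
ranked_hits = [(12, "new"), (3, "apple"), (1, "new"), (10, "apple"),
               (11, "search"), (4, "new"), (5, "apple")]

groups = defaultdict(list)
for docnum, tag in ranked_hits:
    groups[tag].append(docnum)

assert groups["new"] == [12, 1, 4]   # same relative order as the results
assert groups["search"] == [11]

# A Count-style facet map simply reduces each group to its size
counts = {tag: len(docnums) for tag, docnums in groups.items()}
assert counts == {"new": 3, "apple": 3, "search": 1}
```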
has_exact_length()¶
Returns True if this results object already knows the exact number of matching documents.
has_matched_terms()¶
Returns True if the search recorded which terms matched in which documents.
>>> r = searcher.search(myquery)
>>> r.has_matched_terms()
False
key_terms(fieldname, docs=10, numterms=5, model=<class 'whoosh.classify.Bo1Model'>, normalize=True)¶
Returns the ‘numterms’ most important terms from the top ‘docs’ documents in these results. “Most important” is generally defined as terms that occur frequently in the top hits but relatively infrequently in the collection as a whole.
matched_terms()¶
Returns the set of ("fieldname", "text") tuples representing terms from the query that matched one or more of the TOP N documents (this does not report terms for documents that match the query but did not score high enough to make the top N results). You can compare this set to the terms from the original query to find terms which didn't occur in any matching documents.
This is only valid if you used terms=True in the search call to record matching terms. Otherwise it will raise an exception.
>>> q = myparser.parse("alfa OR bravo OR charlie")
>>> results = searcher.search(q, terms=True)
>>> results.terms()
set([("content", "alfa"), ("content", "charlie")])
>>> q.all_terms() - results.terms()
set([("content", "bravo")])
score(n)¶
Returns the score for the document at the Nth position in the list of ranked documents. If the search was not scored, this may return None.
scored_length()¶
Returns the number of scored documents in the results, equal to or less than the limit keyword argument to the search.
>>> r = mysearcher.search(myquery, limit=20)
>>> len(r)
1246
>>> r.scored_length()
20
This may be fewer than the total number of documents that match the query, which is what len(Results) returns.
upgrade(results, reverse=False)¶
Re-sorts the results so any hits that are also in ‘results’ appear before hits not in ‘results’, otherwise keeping their current relative positions. This does not add the documents in the other results object to this one.
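The reordering described here amounts to a stable partition: hits that also appear in the other results move to the front (or to the back when reverse=True) while relative order is preserved within each group. The sketch below works over bare document numbers and is illustrative, not Whoosh's actual implementation:

```python
def upgrade_order(hits, other_hits, reverse=False):
    """Stable partition of `hits`: members of `other_hits` come first
    (or last when reverse=True); relative order is preserved."""
    other = set(other_hits)
    matched = [h for h in hits if h in other]
    unmatched = [h for h in hits if h not in other]
    return unmatched + matched if reverse else matched + unmatched

ranked = [7, 3, 9, 1, 5]
# 9 and 5 were also found in the other results, so they float to the front
assert upgrade_order(ranked, [9, 5]) == [9, 5, 7, 3, 1]
assert upgrade_order(ranked, [9, 5], reverse=True) == [7, 3, 1, 9, 5]
```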
- class whoosh.searching.Hit(results, docnum, pos=None, score=None)¶
Represents a single search result (“hit”) in a Results object.
This object acts like a dictionary of the matching document's stored fields. If for some reason you need an actual dict object, use Hit.fields() to get one.
>>> r = searcher.search(query.Term("content", "render"))
>>> r[0]
< Hit {title = u"Rendering the scene"} >
>>> r[0].rank
0
>>> r[0].docnum == 4592
True
>>> r[0].score
2.52045682
>>> r[0]["title"]
"Rendering the scene"
>>> r[0].keys()
["title"]
highlights(fieldname, text=None, top=3, minscore=1)¶
Returns highlighted snippets from the given field:
r = searcher.search(myquery)
for hit in r:
    print(hit["title"])
    print(hit.highlights("content"))
See How to create highlighted search result excerpts.
To change the fragmenter, formatter, order, or scorer used in highlighting, you can set attributes on the results object:
from whoosh import highlight

results = searcher.search(myquery, terms=True)
results.fragmenter = highlight.SentenceFragmenter()
...or use a custom whoosh.highlight.Highlighter object:
hl = highlight.Highlighter(fragmenter=sf)
results.highlighter = hl
matched_terms()¶
Returns the set of ("fieldname", "text") tuples representing terms from the query that matched in this document. You can compare this set to the terms from the original query to find terms which didn't occur in this document.
This is only valid if you used terms=True in the search call to record matching terms. Otherwise it will raise an exception.
>>> q = myparser.parse("alfa OR bravo OR charlie")
>>> results = searcher.search(q, terms=True)
>>> for hit in results:
...     print(hit["title"])
...     print("Contains:", hit.matched_terms())
...     print("Doesn't contain:", q.all_terms() - hit.matched_terms())
more_like_this(fieldname, text=None, top=10, numterms=5, model=<class 'whoosh.classify.Bo1Model'>, normalize=True, filter=None)¶
Returns a new Results object containing documents similar to this hit, based on “key terms” in the given field:
r = searcher.search(myquery)
for hit in r:
    print(hit["title"])
    print("Top 3 similar documents:")
    for subhit in hit.more_like_this("content", top=3):
        print("  ", subhit["title"])
- class whoosh.searching.ResultsPage(results, pagenum, pagelen=10)¶
Represents a single page out of a longer list of results, as returned by whoosh.searching.Searcher.search_page(). Supports a subset of the interface of the Results object, namely getting stored fields with __getitem__ (square brackets), iterating, and the score() and docnum() methods.
The offset attribute contains the results number this page starts at (numbered from 0). For example, if the page length is 10, the offset attribute on the second page will be 10.
The pagecount attribute contains the number of pages available.
The pagenum attribute contains the page number. This may be less than the page you requested if the results had too few pages. For example, if you do:

ResultsPage(results, 5)

but the results object only contains 3 pages worth of hits, pagenum will be 3.
The pagelen attribute contains the number of results on this page (which may be less than the page length you requested if this is the last page of the results).
The total attribute contains the total number of hits in the results.
>>> mysearcher = myindex.searcher()
>>> pagenum = 2
>>> page = mysearcher.find_page(pagenum, myquery)
>>> print("Page %s of %s, results %s to %s of %s" %
...       (pagenum, page.pagecount, page.offset+1,
...        page.offset+page.pagelen, page.total))
>>> for i, fields in enumerate(page):
...     print("%s. %r" % (page.offset + i + 1, fields))
>>> mysearcher.close()
To set highlighter attributes (for example formatter), access the underlying Results object:
page.results.formatter = highlight.UppercaseFormatter()
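The relationships between total, pagelen, pagecount, pagenum, and offset reduce to simple arithmetic. The sketch below assumes the documented clamping of a too-high page number; the helper name page_info is invented for illustration:

```python
import math

def page_info(total, requested_pagenum, pagelen=10):
    """Compute ResultsPage-style attributes for one page of hits."""
    pagecount = max(1, math.ceil(total / pagelen))
    pagenum = min(requested_pagenum, pagecount)   # clamp, as documented
    offset = (pagenum - 1) * pagelen
    return {"total": total,
            "pagecount": pagecount,
            "pagenum": pagenum,
            "offset": offset,
            "pagelen": min(pagelen, total - offset)}

# 25 hits, 10 per page: asking for page 5 clamps to the last page (3),
# which starts at offset 20 and holds only the final 5 results.
info = page_info(25, 5)
assert info == {"total": 25, "pagecount": 3, "pagenum": 3,
                "offset": 20, "pagelen": 5}
```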
Exceptions¶
- exception whoosh.searching.NoTermsException¶
Exception raised when you try to access matched terms on a Results object that was created without them. To record which terms matched in which document, you need to call the Searcher.search() method with terms=True.
OPENDEV(3) OpenBSD Programmer's Manual OPENDEV(3)
NAME
opendev - short form device open routine
SYNOPSIS
#include <util.h>
int
opendev(char *path, int oflags, int dflags, char **realpath);
DESCRIPTION
The opendev() function opens a device using the ``short form'' name.
This is typically ``sd0'' or ``sd0c'', for instance, which will be
expanded to /dev/rsd0c on most architectures.
The oflags are the same as the flags passed to open(2).
The dflags are specified by OR'ing the following values:
OPENDEV_PART attempt to open the raw partition during expansion
OPENDEV_DRCT attempt to open the device itself during expansion
If realpath is not NULL, it is modified to point at the fully expanded
device name.
RETURN VALUES
The opendev() return value and errors are the same as the return value
and errors of open(2).
SEE ALSO
open(2)
HISTORY
The opendev() function first appeared in OpenBSD 1.2.
OpenBSD 2.6 June 17, 1996 1 | http://www.rocketaware.com/man/man3/opendev.3.htm | crawl-002 | refinedweb | 157 | 53.1 |
A couple of weeks ago, when I was in Holland speaking at SDC, an attendee asked me how he could call methods in an Office solution (VSTO) from VBA functions defined in a document, and vice versa. I thought I'd follow up with a post on how to do this, but first a little background on why this architecture would make sense.
There are many reasons why you would build an Office solution using Visual Studio (VSTO) as opposed to a pure VBA solution. Andri Yadi, VSTO MVP, wrote a great piece on his blog a while back explaining the benefits of VSTO compared to VBA. He broke it down into 10 areas, of which the main benefits are the tools and designers you have available in Visual Studio as well as the entire .NET framework and modern languages at your disposal.
However, there are probably many VBA assets that people in your company have already written, like complex algorithms or other business logic that you really don’t want to rewrite. Or maybe you still want to allow users to customize these functions in the VBA editor but it’s necessary for you to call them from your .NET code.
Likewise, you may want to develop a customization that takes advantage of WCF services or a WPF UI, modern language features, or any other feature of the .NET framework that would be difficult or impossible to do in VBA, and you want the user to be able to access these methods from their VBA functions. The attendee at SDC didn’t go into much detail on what his Office customization was doing exactly but he wanted to make some of his public methods available to his VBA users and this makes sense in a lot of situations. Luckily Visual Studio makes this very easy to do.
Creating an Excel Document Customization
For this example I’ll create an Excel document customization that accesses data through a WCF service and does some calculations on that data. The calculations, however, will be in VBA. To access the remote data over the internet I’ll create an ADO.NET Data Service. I want to pull up data in a Northwind view called Sales Totals By Amount. I’ve shown how to create an ADO.NET Data Service many times before so I won’t go into too much detail here. Please refer to the steps shown in the Using ADO.NET Data Services article. The only difference in this case is I selected the View Sales Totals By Amount into my Entity Framework model when I performed that step.
I have an Excel macro-enabled workbook that already has a simple VBA function that Sums all the columns below the first row. The function is sitting in a module called MyFunctions.
To create the new Excel workbook customization I’m going to add a new project to my solution and select Office 2007 Excel Workbook. Next it will ask if you want to create a new document or use an existing one, here’s where I’ll specify the macro-enabled workbook I already have above.
Next Add a Service Reference to the ADO.NET Data Service (which I called NorthwindReportService) just like I showed here and copy the URI into your clipboard. Then create a setting to store the URI, just double-click on My Project (Properties in C#) and select the Settings tab and enter an application scope property called ServiceURI.
When you add the service reference this generates client-side proxy types that you can use. I’m going to bind the data returned from Sales_Totals_by_Amount to an Excel ListObject. Open the Data Source window (Data –> Show Data Sources) and then add a new data source (Data –> Add New Data Source…). In the Data Source Connection Wizard select Object, then Next, then expand the types in your project’s NorthwindReportService namespace. Select Sales_Totals_by_Amount and then click Finish and you will see the type’s properties appear in the Data Sources Window:
Double-click on Sheet1 in the project and drag the Sales_Totals_by_Amount from the Data Sources window onto the second row of the sheet (our macro is going to sum into the first row so we want to place the data starting on the second row). This will automatically set up a BindingSource in the system tray that we will use to set our list of data coming from the service. If you are familiar with Winforms development this should seem very familiar. The ListObject is the main data object you work with in Excel solutions. For this example I’m going to select the OrderId column, right-click and then Delete. I’ll do the same to the ShippedDate column because I only want to display the CompanyName and SaleAmount for this example. Finally I’ll set the formatting (Home Tab on the Excel Designer) to Currency for the first cell.
Now we’re ready to write some code to load our data. Right-click on ThisWorkbook and select View Code. Here I’m going to create a Friend ReadOnly Property so we can easily access the service reference from anywhere in the project. I’m making this Friend so that it won’t be visible outside of the .NET assembly. I’m also creating a Public method that gets the data from our service and optionally accepts a Company Name. The results are then set to the DataSource of the ListObject’s BindingSource on Sheet1:
Imports VBATest.NorthwindReportService

Public Class ThisWorkbook

    Private _ReportService As New NorthwindEntities(New Uri(My.Settings.ServiceURI))

    Friend ReadOnly Property ReportService() As NorthwindEntities
        Get
            Return _ReportService
        End Get
    End Property

    Public Sub GetData(Optional ByVal companyName = "")
        Try
            If Globals.Sheet1 IsNot Nothing Then
                Dim results As IEnumerable(Of Sales_Totals_by_Amount)
                If companyName = "" Then
                    results = Me.ReportService.Sales_Totals_by_Amount
                Else
                    results = From s In Me.ReportService.Sales_Totals_by_Amount _
                              Where s.CompanyName.StartsWith(companyName)
                End If
                Globals.Sheet1.Sales_Totals_by_AmountBindingSource.DataSource = results.ToList()
            End If
        Catch ex As Exception
            'TODO: Error Handling
            MsgBox(ex.ToString())
        End Try
    End Sub

    Private Sub ThisWorkbook_Startup() Handles Me.Startup
    End Sub

    Private Sub ThisWorkbook_Shutdown() Handles Me.Shutdown
    End Sub

End Class
Calling VBA from VSTO
Next I want to create a button on the ribbon that will first call the GetData method, then select the first cell in Sheet1, and finally call the VBA function SumBelow. In order to call a VBA method from VSTO you call Globals.ThisWorkbook.Application.Run passing it the full name to the VBA method. For this example that would be VBATest.xlsm!MyFunctions.SumBelow.
Add a New Item to the project and select Office, Ribbon (Visual Designer) and then drag a Button from the Office Ribbon Controls to the Group and Label it “Get Data”. I also specified an OfficeImageId to make it look pretty. (BTW, a nice way to browse the Office Images is to install the VSTO Power Tools and install the RibbonID Add-in like Ty shows in this video.)
Double-click on the Get Data button to add a click event handler and we’ll write the following code to load all the data and then call the VBA function. You need to make sure you set up proper error handling here because if the VBA function is removed or renamed the code here will fail. This code will also fail if the appropriate access is not granted to VBA macros in Excel. By default, VBA macros are not enabled but you can enable them on a per workbook basis (there’s a button at the top of the first sheet when you run it). This scenario assumes you have existing VBA code that has permission to run and you’re now calling those existing functions from VSTO.
Imports VBATest.NorthwindReportService
Imports Microsoft.Office.Tools.Ribbon

Public Class Ribbon1

    Private Sub Button1_Click() Handles Button1.Click
        'load the data from the service
        Globals.ThisWorkbook.GetData()
        Try
            'Make sure the first cell is selected
            Globals.Sheet1.Range("A1").Select()
            'Run the VBA function. This will result in a runtime error if the function
            ' is removed or renamed or not allowed to run so make sure to provide
            ' adequate error handling.
            Globals.ThisWorkbook.Application.Run("VBATest.xlsm!MyFunctions.SumBelow")
        Catch ex As Exception
            'Todo: Error handling
            MsgBox(ex.ToString())
        End Try
    End Sub

End Class
Hit F5 to run. If you see a Security Warning (the default) that Macros are disabled, then just click Options and select “Enable this content”. Select the Add-Ins tab on the Ribbon and click the GetData button to see the data get loaded from the service and then the SumBelow VBA function will be called which will auto-sum the SaleAmount field and show the total in the first row.
Calling VSTO methods from VBA
As you can see it’s really easy to call VBA code from your Office solution in Visual Studio (VSTO) but it’s also fragile because of the late-bound architecture and the requirement that macros be enabled for the Workbook. Like all late-bound code, you need to have adequate error handling to prevent crashes. Much less fragile is calling VSTO methods from VBA functions because these methods are compiled into your .NET assembly and exposed via COM-interop which makes them available to VBA.
If we go back to our project and double-click on ThisWorkbook and look in the Properties window, you should see a property called EnableVbaCallers. Setting that Property to True will expose all Public methods in the ThisWorkbook class via COM to VBA.
If you now go back into the code-behind for ThisWorkbook you will see some COM attributes added to the class:
<Microsoft.VisualBasic.ComClassAttribute()> _
<System.Runtime.InteropServices.ComVisibleAttribute(True)> _
Public Class ThisWorkbook
...
Now we can call the GetData method from VBA code. Hit F5 to run and enable the macros (if asked) on the Workbook again. Select the Developer tab and launch the Visual Basic editor. (If you don’t see a developer tab click the Office icon – the globe in the left-hand corner – select Excel Options, and then on the Popular tab check the “Show Developer tab in the Ribbon”.)
Double-click on the ThisWorkbook and you will see that Visual Studio added a property to our VBA code for us called CallVSTOAssembly. This allows us to call any public method we defined in the ThisWorkbook class back in Visual Studio. Let’s add another function to the MyFunctions module that collects input from the user on the company name to look up and then fetches the data by calling the GetData method in .NET.
Save your code here and close the VBA Editor. Now back on the Developer tab on the Ribbon select Macros and then you should see the one we just wrote called GetDataAndSumBelow, select it and click Run. It will prompt for a company name (just type ‘S’ for instance) and it will run the ADO.NET Data Service query via the call to the .NET GetData method and then will return and call the SumBelow VBA function. Cool!
BUT WAIT… DON’T CLOSE EXCEL YET!
Tips Editing VBA Code when Debugging
Because we wrote the second VBA function above while we were in debug mode in Visual Studio, once we close Excel we will lose all the VBA code we wrote when we debug again.
Because of the way Visual Studio works with Office solutions, we aren’t actually editing the xlsm file in the project, we’re editing the running xlsm file in the \bin directory that has the VSTO solution attached. You cannot just copy this one in the \bin folder back into the project otherwise Visual Studio will report an error that a customization is already attached to the document when you compile again. So what do we do?
There’s probably other ways to do this but what I found the easiest was to open the Visual Basic editor again, select the MyFunctions module where all my code is stored and then right-click and select “Export File”. This will allow you to save the code outside the Workbook. Then when you debug again you can just import it by right-clicking again (delete the current one first).
When you’re finally satisfied with your VBA – VSTO code interop, close Visual Studio and open the .xlsm file in the project directory (not the \bin) and re-import your code again into that version. Then restart Visual Studio and it will be in there when you start debugging again. I find this easier than copying my code into the clipboard, closing VS, modifying the document, reopening VS every time. Just be aware of what version of the document you’re modifying when you tweak your VBA code and you should be OK.
I’ve uploaded the code for this example onto Code Gallery so have a look:
More Resources
For more information on VBA – VSTO interop with Visual Studio please check out the following resources:
- MSDN Magazine: Extend Your VBA Code With VSTO
- MSDN Article: VBA Interoperability with Visual Studio Tools for the Office System (3.0)
- Visual How To: Interoperability Between VBA and Visual Studio Tools for the Office System (3.0)
- How Do I: Call VSTO Code from VBA?
- How Do I: Enable an Office Application Add-In using a VSTO Add-in?
Enjoy!
Join the conversationAdd Comment
Very good points, and nice informative piece – but I think MS is really missing it on this one. I have many clients who ARE NOT programmers, but are very talented at VBA. They are CPA’s and Actuaries – bean counters if you like – and they are truly scared and sweating this change over. They DONT want to take a few years to learn and stuggle through Visual Studio, and VSTO to them is simply learning VB.NET or C# and they simply dont have the time to "operate two careers" (as one client put it). So far, all our clients in total are saying things like "if we have to stay with Office 2003 forever, we will" – many feel betrayed!
I like VSTO, understand the technology and also understand that things must "move on" – but I think MS needs to do a great deal more than it is doing because out in the field, VSTO is seen as a nightmare and dead-end path to those who know VBA through many years of working it, but see VS/VSTO as way too much to take on and still do their normal jobs.
I hope MS has some plan of some kind to deal with this or Office is going to suffer greatly – at least in the financial and actuarial markets. As it is no one wants to touch Office 2007, and when its successor comes out, I fear that will only be avoided all the more.
MS really needs to get in touch with its marketplace and provide some sort of transition tool, or strategy that is really do-able. Just forcing this on the market is not going to cut it.
I hope all VBA developers could enjoy this post as much as I did Beth. I give it 5/5.
Hi AndyF,
I totally agree, and that’s why the article focuses on interop. As a pro developer you can provide folks who are writing VBA code access to methods in the .NET framework.
As Office and .NET evolve VBA interop is a must-have for migration even if we provide a new environment because there will be so many apps already built that need to be extended. And yes, there are folks here looking into this problem.
Not sure what you mean about Office 2007, it still supports VBA as does VSTO. And we’re not trying to "force" you, we’re trying to provide businesses and programmers with options, VSTO being one of them.
Thanks for the feedback, much appreciated!
Cheers,
-Beth
Hi Beth,
Great article. Is it possible to call a method in a VSTO Application Level Add-In from VBA? This article and all the others I have seen apply only to Document level add-ins.
cheers
Peter
Hi Peter,
Here’s a walkthrough on how to expose methods in a VSTO Add-In to COM/Office apps:
msdn.microsoft.com/…/bb608614.aspx
HTH,
-B
What do I need to use VSTO and do what you did in this article? I have Office 2010 Plus. Do I also need Visual Studio (Professional)to use VSTO? I know it used to be required, but is it still? Is Visual Studio Express required?
Can you specify what I need to purchase, and what can be downloaded free of charge? This is one of the greatest obstacles I face, not knowing if I have all the required elements to be able to try something new, being so new to programming all around; it's all still pretty much a big cloud of mysteriously technical sounding words.
Also, there's a lot of training available now for InfoPath, something I've been searching for forever! But nearly everywhere I see "InfoPath," I also see "SharePoint Server." :( The company I work for does not have SharePoint and will not be getting it. Is there someplace I can go to see what I can and can't do with InfoPath without SharePoint Server?
Thank you,
Kristen
Hi Kristen,
You need Visual Studio Professional or higher to get the Visual Studio Tools for Office (VSTO). You can purchase that here:…/buy
I'm not familiar with the capabilities of InfoPath but you can check their blog here:
blogs.msdn.com/…/infopath
You can also ask knowledgeable folks in the InfoPath forums on this site:
HTH,
-Beth
This is a great article. Good and important points to know.
This allows you to call VBA functions, but is there ANY WAY to do the opposite in a document-level add-in? By opposite I mean: can I expose a C# class in a document-level add-in and create an instance of it in VBA?
I am well aware that you can expose classes in an application-level add-in but is it somehow possible in a document-level one?
If interested please see my question on Stack Overflow @ stackoverflow.com/…/how-to-expose-a-c-sharp-class-to-a-vba-module-in-a-document-level-add-in
Thanks for this valuable article. I just have one question before I use VSTO. Can I run an MS Access macro without having MS Office installed?
If we want to create a custom tab in Outlook 2007, is it possible with the help of VBA? If it's possible, please show me a way to do it.
I followed this method for Word templates and it works fine… as long as the Word document was originally created with this template. However, if you create a Word document with an old template, and then add this above method in a new template, the VSTO functions cannot be fired because the Startup method is not triggered when you add a Template to an already existing file. I thought saving the document and re-opening would suffice but users claim that is failing too. | https://blogs.msdn.microsoft.com/bethmassi/2009/11/05/interop-between-vba-and-visual-studio-office-solutions-vsto/ | CC-MAIN-2016-30 | refinedweb | 3,211 | 61.46 |
Chapter 13 Integrating Datasets
Computational correction of these effects is critical for eliminating batch-to-batch variation, allowing data across multiple batches to be combined for common downstream analysis.
13.2 Setting up the data
To demonstrate, we will use two separate 10X Genomics PBMC datasets generated in two different batches. Each dataset was obtained from the TENxPBMCData package and separately subjected to basic processing steps. Separate processing prior to the batch correction step is more convenient, scalable and (on occasion) more reliable. For example, outlier-based QC on the cells is more effective when performed within a batch (Section 6.3.2.3). The same can also be said for trend fitting when modelling the mean-variance relationship (Section 8.2.4.1).
To prepare for the batch correction:
We subset all batches to the common “universe” of features. In this case, it is straightforward as both batches use Ensembl gene annotation.
## [1] 31232
We rescale each batch to adjust for differences in sequencing depth between batches. The multiBatchNorm() function recomputes log-normalized expression values after adjusting the size factors for systematic differences in coverage between SingleCellExperiment objects. (Size factors only remove biases between cells within a single batch.) This improves the quality of the correction by removing one aspect of the technical differences between batches.
We perform feature selection for the integration. It is generally safer to include more genes than are used in a single dataset analysis, to ensure that markers are retained for any dataset-specific subpopulations that might be present. For a top \(X\) selection, this means using a larger \(X\) (say, ~5000), or in this case, we simply take all genes above the trend. That said, many of the signal-to-noise considerations described in Section 8.3 still apply here, so some experimentation may be necessary for best results.
Alternatively, a more forceful approach to feature selection can be used based on marker genes from within-batch comparisons; this is discussed in more detail in Section 13.7.
13.3 Diagnosing batch effects
Before we actually perform any correction, it is worth examining whether there is any batch effect in this dataset.
We combine the two
SingleCellExperiments and perform a PCA on the log-expression values for all genes with positive (average) biological components.
In this example, our datasets are file-backed and so we instruct
runPCA() to use randomized PCA for greater efficiency -
see Section 23.2.2 for more details - though the default IRLBA will suffice for more common in-memory representations.
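The randomized PCA mentioned here can be sketched in a language-agnostic way. The following Python/numpy outline of Halko-style randomized SVD (helper names are mine, not the book's R code) shows why it is cheap for large matrices: a couple of passes over the data plus an exact SVD of a tiny projected matrix.

```python
import numpy as np

def randomized_svd(X, k, n_oversample=10, n_iter=4, seed=0):
    """Approximate top-k SVD of X via random projection (Halko-style sketch)."""
    rng = np.random.default_rng(seed)
    n_random = k + n_oversample
    # Sketch the column space of X with a random Gaussian test matrix.
    Q = X @ rng.standard_normal((X.shape[1], n_random))
    # Power iterations sharpen the sketch when singular values decay slowly.
    for _ in range(n_iter):
        Q, _ = np.linalg.qr(X @ (X.T @ Q))
    Q, _ = np.linalg.qr(Q)
    # Project X into the small subspace and do an exact SVD there.
    B = Q.T @ X
    Uhat, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Uhat)[:, :k], s[:k], Vt[:k]

# Low-rank test matrix standing in for a cells-by-genes matrix: 500 x 200.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 200))
U, s, Vt = randomized_svd(X, k=5)
exact = np.linalg.svd(X, compute_uv=False)[:5]
print(np.allclose(s, exact, rtol=1e-5))  # True
```

Because the test matrix has rank 5, the sketch captures its range exactly and the top singular values match the exact SVD; on real data the approximation error depends on the spectrum decay.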
# Synchronizing the metadata for cbind()ing.
We can also visualize the uncorrected coordinates using a \(t\)-SNE plot (Figure 13.1). The strong separation between cells from different batches is consistent with the clustering results.
set.seed(1111001) uncorrected <- runTSNE(uncorrected, dimred="PCA") plotTSNE(uncorrected, colour_by="batch")
Figure 13.1: \(t\)-SNE plot of the PBMC datasets before correction. Each point is a cell that is colored according to its batch of origin.
13.4 Linear regression
After clustering, we observe that most clusters consist of mixtures of cells from the two replicate batches, consistent with the removal of the batch effect. This conclusion is supported by the apparent mixing of cells from different batches in Figure 13.2.
Figure 13.2: \(t\)-SNE plot of the PBMC datasets after correction with rescaleBatches(). Each point represents a cell and is colored according to the batch of origin.
Figure 13.3: \(t\)-SNE plot of the PBMC datasets after correction with regressBatches(). Each point represents a cell and is colored according to the batch of origin.
13.5 Performing MNN correction
## class: SingleCellExperiment ## dim: 13431 6791 ## metadata(2): merge.info pca.info ## assays(1): reconstructed ## rownames(13431): ENSG00000239945 ENSG00000228463 ... ENSG00000198695 ## ENSG00000198727 ## rowData names(1): rotation ## colnames: NULL ## colData names(1): batch ## reducedDimNames(1): corrected ## altExpNames(0):
The corrected values in the reconstructed assay are best reserved for visualization, as discussed further in Section 13.8.
See Chapter 34 for an example of a more complex
fastMNN() merge involving several human pancreas datasets generated by different authors on different patients with different technologies.
13.6 Correction diagnostics
13.6.1 Mixing between batches
It is possible to quantify the degree of mixing across batches by testing each cluster for imbalances in the contribution from each batch (Büttner et al. 2019).
This is done by applying Pearson’s chi-squared test to each row of
tab.mnn where the expected proportions under the null hypothesis are proportional to the total number of cells per batch.
Low \(p\)-values indicate that there are significant imbalances in the batch composition of the corresponding clusters.
In practice, this strategy is most suited to technical replicates with identical population composition; it is usually too stringent for batches with more biological variation, where proportions can genuinely vary even in the absence of any batch effect.
chi.prop <- colSums(tab.mnn)/sum(tab.mnn) chi.results <- apply(tab.mnn, 1, FUN=chisq.test, p=chi.prop) p.values <- vapply(chi.results, "[[", i="p.value", 0) p.values
## 1 2 3 4 5 6 7 8 ## 9.047e-02 3.093e-02 6.700e-03 2.627e-03 8.424e-20 2.775e-01 5.546e-05 2.274e-11 ## 9 10 11 12 13 14 15 16 ## 2.136e-04 5.480e-05 4.019e-03 2.972e-03 1.538e-12 3.936e-05 2.197e-04 7.172e-01
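For readers outside R, the same per-cluster test is easy to reproduce by hand. Below is a self-contained Python sketch for the two-batch case (the count table is invented, not the book's tab.mnn; with two batches the statistic has one degree of freedom, so the chi-squared p-value reduces to a complementary error function):

```python
import math

def chisq_pvalue_1df(stat):
    # Survival function of the chi-squared distribution with 1 df.
    return math.erfc(math.sqrt(stat / 2.0))

# Invented cluster-by-batch counts: two well-mixed clusters, one skewed one.
tab = [(300, 600), (100, 200), (40, 10)]
tot1 = sum(r[0] for r in tab)
tot2 = sum(r[1] for r in tab)
p1, p2 = tot1 / (tot1 + tot2), tot2 / (tot1 + tot2)  # null proportions

p_values = []
for n1, n2 in tab:
    e1, e2 = (n1 + n2) * p1, (n1 + n2) * p2          # expected under the null
    stat = (n1 - e1) ** 2 / e1 + (n2 - e2) ** 2 / e2
    p_values.append(chisq_pvalue_1df(stat))

for (n1, n2), p in zip(tab, p_values):
    print((n1, n2), f"p = {p:.3g}")
```

The two roughly 1:2 clusters give large p-values while the skewed (40, 10) cluster is flagged, mirroring how the stringency of this test scales with cluster size.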
We favor a more qualitative approach whereby we compute the variation in the log-abundances to rank the clusters with the greatest variability in their proportional abundances across batches. We can then focus on batch-specific clusters that may be indicative of incomplete batch correction. Obviously, though, this diagnostic is subject to interpretation as the same outcome can be caused by batch-specific populations; some prior knowledge about the biological context is necessary to distinguish between these two possibilities. For the PBMC dataset, none of the most variable clusters are overtly batch-specific, consistent with the fact that our batches are effectively replicates.
# Avoid minor difficulties with the 'table' class. tab.mnn <- unclass(tab.mnn) # Using a large pseudo.count to avoid unnecessarily # large variances when the counts are low. norm <- normalizeCounts(tab.mnn, pseudo.count=10) # Ranking clusters by the largest variances. rv <- rowVars(norm) DataFrame(Batch=tab.mnn, var=rv)[order(rv, decreasing=TRUE),]
## DataFrame with 16 rows and 3 columns ## Batch.1 Batch.2 var ## <integer> <integer> <numeric> ## 15 4 36 0.934778 ## 13 144 93 0.728465 ## 9 11 56 0.707757 ## 8 162 118 0.563419 ## 4 12 4 0.452565 ## ... ... ... ... ## 6 17 19 0.05689945 ## 10 547 1083 0.04527468 ## 2 289 542 0.02443988 ## 1 337 606 0.01318296 ## 16 4 8 0.00689661
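The pseudo-count trick used with normalizeCounts() is simple to mimic without Bioconductor. The Python sketch below (invented counts; a crude stand-in for scuttle's library-size normalization) ranks clusters by the variance of their normalized log-abundances, with the large pseudo-count damping the variance of low-count clusters:

```python
import math

# Hypothetical cluster-by-batch counts (same shape of data as tab.mnn).
tab = {"c1": (337, 606), "c8": (162, 118), "c13": (144, 93), "c15": (4, 36)}
sizes = [sum(v[b] for v in tab.values()) for b in range(2)]  # per-batch totals

def log_abundance(count, size, pseudo=10):
    # Scale counts to the mean batch size, then log2 with a large pseudo-count
    # to avoid unnecessarily large variances when the counts are low.
    scale = (sum(sizes) / len(sizes)) / size
    return math.log2(count * scale + pseudo)

var = {}
for clust, counts in tab.items():
    la = [log_abundance(c, s) for c, s in zip(counts, sizes)]
    mean = sum(la) / len(la)
    var[clust] = sum((x - mean) ** 2 for x in la) / (len(la) - 1)

for clust in sorted(var, key=var.get, reverse=True):
    print(clust, round(var[clust], 3))
```

With these numbers the strongly skewed cluster "c15" tops the ranking and the well-mixed "c1" lands at the bottom, which is the qualitative readout the book uses.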
We can also visualize the corrected coordinates using a \(t\)-SNE plot (Figure 13.4).
Figure 13.4: \(t\)-SNE plot of the PBMC datasets after MNN correction. Each point is a cell that is colored according to its batch of origin.
For
fastMNN(), one useful diagnostic is the proportion of variance within each batch that is lost during MNN correction.
Specifically, this refers to the within-batch variance that is removed during orthogonalization with respect to the average correction vector at each merge step.
This is returned via the
lost.var field in the metadata of
mnn.out, which contains a matrix of the variance lost in each batch (column) at each merge step (row).
## [,1] [,2] ## [1,] 0.006617 0.003315
Large proportions of lost variance (>10%) suggest that correction is removing genuine biological heterogeneity. This would occur due to violations of the assumption of orthogonality between the batch effect and the biological subspace (Haghverdi et al. 2018). In this case, the proportion of lost variance is small, indicating that non-orthogonality is not a major concern.
13.6.2 Preserving biological heterogeneity
Another useful diagnostic check is to compare the clustering within each batch to the clustering of the merged data. Accurate data integration should preserve variance within each batch as there should be nothing to remove between cells in the same batch. This check complements the previously mentioned diagnostics that only focus on the removal of differences between batches. Specifically, it protects us against cases where the correction method simply aggregates all cells together, which would achieve perfect mixing but also discard the biological heterogeneity of interest.
Ideally, we should see a many-to-1 mapping where the across-batch clustering is nested inside the within-batch clusterings. This indicates that any within-batch structure was preserved after correction while acknowledging that greater resolution is possible with more cells. In practice, more discrepancies can be expected even when the correction is perfect, due to the existence of closely related clusters that were arbitrarily separated in the within-batch clustering. As a general rule, we can be satisfied with the correction if the vast majority of entries in Figure 13.5 are zero, though this may depend on whether specific clusters of interest are gained or lost.
library(pheatmap) # For the first batch (adding +10 for a smoother color transition # from zero to non-zero counts for any given matrix entry). tab <- table(paste("after", clusters.mnn[rescaled$batch==1]), paste("before", colLabels(pbmc3k))) heat3k <- pheatmap(log10(tab+10), cluster_row=FALSE, cluster_col=FALSE, main="PBMC 3K comparison", silent=TRUE) # For the second batch. tab <- table(paste("after", clusters.mnn[rescaled$batch==2]), paste("before", colLabels(pbmc4k))) heat4k <- pheatmap(log10(tab+10), cluster_row=FALSE, cluster_col=FALSE, main="PBMC 4K comparison", silent=TRUE) gridExtra::grid.arrange(heat3k[[4]], heat4k[[4]])
Figure 13.5: Comparison between the within-batch clusters and the across-batch clusters obtained after MNN correction. One heatmap is generated for each of the PBMC 3K and 4K datasets, where each entry is colored according to the number of cells with each pair of labels (before and after correction).
We use the adjusted Rand index (Section 10.6.2) to quantify the agreement between the clusterings before and after batch correction. Recall that larger indices are more desirable as this indicates that within-batch heterogeneity is preserved, though this must be balanced against the ability of each method to actually perform batch correction.
library(bluster) ri3k <- pairwiseRand(clusters.mnn[rescaled$batch==1], colLabels(pbmc3k), mode="index") ri3k
## [1] 0.7361
## [1] 0.8301
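pairwiseRand() computes these indices from the contingency table between the two clusterings. To make the arithmetic concrete, here is a stdlib-only Python version of the (global) adjusted Rand index, applied to toy labels rather than the book's data:

```python
from collections import Counter
from math import comb

def adjusted_rand_index(a, b):
    """ARI between two labelings of the same cells (1 = identical structure)."""
    n = len(a)
    pairs = Counter(zip(a, b))                      # contingency table
    sum_nij = sum(comb(c, 2) for c in pairs.values())
    sum_ai = sum(comb(c, 2) for c in Counter(a).values())
    sum_bj = sum(comb(c, 2) for c in Counter(b).values())
    expected = sum_ai * sum_bj / comb(n, 2)
    max_index = (sum_ai + sum_bj) / 2
    return (sum_nij - expected) / (max_index - expected)

within = ["A"] * 50 + ["B"] * 50 + ["C"] * 50
merged_same = within[:]                                  # perfectly preserved
merged_split = ["A1"] * 25 + ["A2"] * 25 + within[50:]   # one cluster split
print(adjusted_rand_index(within, merged_same))          # 1.0
# High but below 1, since one within-batch cluster was split after merging.
print(round(adjusted_rand_index(within, merged_split), 3))
```

An identical clustering scores exactly 1; splitting a single cluster drops the index somewhat, illustrating why a high-but-imperfect ARI after correction is expected.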
We can also break down the ARI into per-cluster ratios for more detailed diagnostics (Figure 13.6). For example, we could see low ratios off the diagonal if distinct clusters in the within-batch clustering were incorrectly aggregated in the merged clustering. Conversely, we might see low ratios on the diagonal if the correction inflated or introduced spurious heterogeneity inside a within-batch cluster.
# For the first batch. tab <- pairwiseRand(colLabels(pbmc3k), clusters.mnn[rescaled$batch==1]) heat3k <- pheatmap(tab, cluster_row=FALSE, cluster_col=FALSE, col=rev(viridis::magma(100)), main="PBMC 3K probabilities", silent=TRUE) # For the second batch. tab <- pairwiseRand(colLabels(pbmc4k), clusters.mnn[rescaled$batch==2]) heat4k <- pheatmap(tab, cluster_row=FALSE, cluster_col=FALSE, col=rev(viridis::magma(100)), main="PBMC 4K probabilities", silent=TRUE) gridExtra::grid.arrange(heat3k[[4]], heat4k[[4]])
Figure 13.6: ARI-derived ratios for the within-batch clusters after comparison to the merged clusters obtained after MNN correction. One heatmap is generated for each of the PBMC 3K and 4K datasets.
13.7 Encouraging consistency with marker genes
In some situations, we will already have performed within-batch analyses to characterize salient aspects of population heterogeneity.
This is not uncommon when merging datasets from different sources where each dataset has already been analyzed, annotated and interpreted separately.
It is subsequently desirable for the integration procedure to retain these “known interesting” aspects of each dataset in the merged dataset.
We can encourage this outcome by using the marker genes within each dataset as our selected feature set for
fastMNN() and related methods.
This focuses on the relevant heterogeneity and represents a semi-supervised approach that is a natural extension of the strategy described in Section 8.4.
To illustrate, we apply this strategy to our PBMC datasets.
We identify the top marker genes from pairwise Wilcoxon ranked sum tests between every pair of clusters within each batch, analogous to the method used by SingleR (Chapter 12).
In this case, we use the top 10 marker genes but any value can be used depending on the acceptable trade-off between signal and noise (and speed).
We then take the union across all comparisons in all batches and use that in place of our HVG set in
fastMNN().
# Recall that groups for marker detection # are automatically defined from 'colLabels()'. stats3 <- pairwiseWilcox(pbmc3k, direction="up") markers3 <- getTopMarkers(stats3[[1]], stats3[[2]], n=10) stats4 <- pairwiseWilcox(pbmc4k, direction="up") markers4 <- getTopMarkers(stats4[[1]], stats4[[2]], n=10) marker.set <- unique(unlist(c(unlist(markers3), unlist(markers4)))) length(marker.set) # getting the total number of genes selected in this manner.
## [1] 314
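The selection logic itself is tiny: take the top-N genes from every pairwise comparison in every batch and union them. Here is a Python sketch with made-up marker lists (a real analysis would pull these from pairwiseWilcox()/getTopMarkers() as above):

```python
# Hypothetical per-batch marker lists: for each batch, each pairwise cluster
# comparison contributes its ranked upregulated genes (lists are invented).
batch1_markers = {
    ("c1", "c2"): ["CD3D", "IL7R", "LTB"],
    ("c1", "c3"): ["CD3D", "CCL5", "NKG7"],
}
batch2_markers = {
    ("k1", "k2"): ["CD79A", "MS4A1", "LTB"],
    ("k1", "k3"): ["LYZ", "S100A8", "CD3D"],
}

def top_n_union(*marker_sets, n=2):
    """Union of the top-n genes from every pairwise comparison in every batch."""
    selected = set()
    for markers in marker_sets:
        for genes in markers.values():
            selected.update(genes[:n])
    return sorted(selected)

feature_set = top_n_union(batch1_markers, batch2_markers, n=2)
print(feature_set)
```

Duplicates across comparisons collapse in the union, so the final feature set stays compact even when many cluster pairs contribute overlapping markers.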
set.seed(1000110) mnn.out2 <- fastMNN(pbmc3k, pbmc4k, subset.row=marker.set, BSPARAM=BiocSingular::RandomParam(deferred=TRUE))
A quick inspection of Figure 13.7 indicates that the original within-batch structure is indeed preserved in the corrected data. This highlights the utility of a marker-based feature set for integrating datasets that have already been characterized separately in a manner that preserves existing interpretations of each dataset. We note that some within-batch clusters have merged, most likely due to the lack of robust separation in the first place, though this may also be treated as a diagnostic on the appropriateness of the integration depending on the context.
mnn.out2 <- runTSNE(mnn.out2, dimred="corrected") gridExtra::grid.arrange( plotTSNE(mnn.out2[,mnn.out2$batch==1], colour_by=I(colLabels(pbmc3k))), plotTSNE(mnn.out2[,mnn.out2$batch==2], colour_by=I(colLabels(pbmc4k))), ncol=2 )
Figure 13.7: \(t\)-SNE plots of the merged PBMC datasets, where the merge was performed using only marker genes identified within each batch. Each point represents a cell that is colored by the assigned cluster from the within-batch analysis for the 3K (left) and 4K dataset (right).
13.8 Using the corrected values
The greatest value of batch correction lies in facilitating cell-based analysis of population heterogeneity in a consistent manner across batches. Cluster 1 in batch A is the same as cluster 1 in batch B when the clustering is performed on the merged data. There is no need to identify mappings between separate clusterings, which might not even be possible when the clusters are not well-separated. The burden of interpretation is consolidated by generating a single set of clusters for all batches, rather than requiring separate examination of each batch’s clusters. Another benefit is that the available number of cells is increased when all batches are combined, which allows for greater resolution of population structure in downstream analyses. We previously demonstrated the application of clustering methods to the batch-corrected data, but the same principles apply for other analyses like trajectory reconstruction.
At this point, it is also tempting to use the corrected expression values for gene-based analyses like DE-based marker gene detection.
This is not generally recommended as an arbitrary correction algorithm is not obliged to preserve the magnitude (or even direction) of differences in per-gene expression when attempting to align multiple batches.
For example, cosine normalization in
fastMNN() shrinks the magnitude of the expression values so that the computed log-fold changes have no obvious interpretation.
Of greater concern is the possibility that the correction introduces artificial agreement across batches.
To illustrate:
- Consider a dataset (first batch) with two cell types, \(A\) and \(B\). Consider a second batch with the same cell types, denoted as \(A'\) and \(B'\). Assume that, for some reason, gene \(X\) is expressed in \(A\) but not in \(A'\), \(B\) or \(B'\) - possibly due to some difference in how the cells were treated, or maybe due to a donor effect.
- We then merge the batches together based on the shared cell types. This yields a result where \(A\) and \(A'\) cells are intermingled and the difference due to \(X\) is eliminated. One can debate whether this should be the case, but in general, it is necessary for batch correction methods to smooth over small biological differences (as discussed in Section 13.3).
- Now, if we corrected the second batch to the first, we must have coerced the expression values of \(X\) in \(A'\) to non-zero values to align with those of \(A\), while leaving the expression of \(X\) in \(B'\) and \(B\) at zero. Thus, we have artificially introduced DE between \(A'\) and \(B'\) for \(X\) in the second batch to align with the DE between \(A\) and \(B\) in the first batch. (The converse is also possible where DE in the first batch is artificially removed to align with the second batch, depending on the order of merges.)
- The artificial DE has implications for the identification of the cell types and interpretation of the results. We would be misled into believing that both \(A\) and \(A'\) are \(X\)-positive, when in fact this is only true for \(A\). At best, this is only a minor error - after all, we do actually have \(X\)-positive cells of that overall type, we simply do not see that \(A'\) is \(X\)-negative. At worst, this can compromise the conclusions, e.g., if the first batch was drug treated and the second batch was a control, we might mistakenly think that a \(X\)-positive population exists in the latter and conclude that our drug has no effect.
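The scenario in these bullets can be made concrete with a toy simulation. Below, a naive mean-shift stands in for whatever alignment a real correction performs; all numbers are invented:

```python
import random
random.seed(42)

# Toy expression of gene X (log-scale) in two batches. In batch 1, type A
# expresses X; in batch 2, its counterpart A' does not.
A  = [random.gauss(2.0, 0.3) for _ in range(100)]   # X-positive
B  = [random.gauss(0.0, 0.1) for _ in range(100)]
A2 = [random.gauss(0.0, 0.1) for _ in range(100)]   # A': X-negative
B2 = [random.gauss(0.0, 0.1) for _ in range(100)]

# A naive correction that aligns A' onto A by shifting by the difference in
# group means (a stand-in for a generic batch-alignment step).
mean = lambda xs: sum(xs) / len(xs)
shift = mean(A) - mean(A2)
A2_corrected = [x + shift for x in A2]

# Before correction there is no DE between A' and B' for X...
print(round(mean(A2) - mean(B2), 2))
# ...after correction, an apparent difference has been manufactured.
print(round(mean(A2_corrected) - mean(B2), 2))
```

The pre-correction difference is near zero while the post-correction difference is near 2, i.e., DE between \(A'\) and \(B'\) has been introduced purely by the alignment, exactly as described above.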
Rather, it is preferable to perform DE analyses using the uncorrected expression values with blocking on the batch, as discussed in Section 11.4. This strategy is based on the expectation that any genuine DE between clusters should still be present in a within-batch comparison where batch effects are absent. It penalizes genes that exhibit inconsistent DE across batches, thus protecting against misleading conclusions when a population in one batch is aligned to a similar-but-not-identical population in another batch. We demonstrate this approach below using a blocked \(t\)-test to detect markers in the PBMC dataset, where the presence of the same pattern across clusters within each batch (Figure 13.8) is reassuring. If integration is performed across multiple conditions, it is even more important to use the uncorrected expression values for downstream analyses - see Section 14.6.2 for a discussion.
m.out <- findMarkers(uncorrected, clusters.mnn, block=uncorrected$batch, direction="up", lfc=1, row.data=rowData(uncorrected)[,3,drop=FALSE]) # A (probably activated?) T cell subtype of some sort: demo <- m.out[["10"]] as.data.frame(demo[1:20,c("Symbol", "Top", "p.value", "FDR")])
## Symbol Top p.value FDR ## ENSG00000177954 RPS27 1 3.399e-168 1.061e-163 ## ENSG00000227507 LTB 1 1.238e-157 1.934e-153 ## ENSG00000167286 CD3D 1 9.136e-89 4.076e-85 ## ENSG00000111716 LDHB 1 8.699e-44 1.811e-40 ## ENSG00000008517 IL32 1 4.880e-31 6.928e-28 ## ENSG00000172809 RPL38 1 8.727e-143 6.814e-139 ## ENSG00000171223 JUNB 1 8.762e-72 2.737e-68 ## ENSG00000071082 RPL31 2 8.612e-78 2.989e-74 ## ENSG00000121966 CXCR4 2 2.370e-07 1.322e-04 ## ENSG00000251562 MALAT1 2 3.618e-33 5.650e-30 ## ENSG00000133639 BTG1 2 6.847e-12 4.550e-09 ## ENSG00000170345 FOS 2 2.738e-46 6.108e-43 ## ENSG00000129824 RPS4Y1 2 1.075e-108 6.713e-105 ## ENSG00000177606 JUN 3 1.039e-37 1.910e-34 ## ENSG00000112306 RPS12 3 1.656e-33 2.722e-30 ## ENSG00000110700 RPS13 3 7.600e-18 7.657e-15 ## ENSG00000198851 CD3E 3 1.058e-36 1.836e-33 ## ENSG00000213741 RPS29 3 1.494e-148 1.555e-144 ## ENSG00000116251 RPL22 4 3.992e-25 4.796e-22 ## ENSG00000144713 RPL32 4 1.224e-32 1.820e-29
plotExpression(uncorrected, x=I(factor(clusters.mnn)), features="ENSG00000177954", colour_by="batch") + facet_wrap(~colour_by)
Figure 13.8: Distributions of RPS27 (ENSG00000177954) uncorrected log-expression values within each cluster in each batch of the merged PBMC dataset.
We suggest limiting the use of per-gene corrected values to visualization, e.g., when coloring points on a \(t\)-SNE plot by per-cell expression. This can be more aesthetically pleasing than uncorrected expression values that may contain large shifts on the colour scale between cells in different batches. Use of the corrected values in any quantitative procedure should be treated with caution, and should be backed up by similar results from an analysis on the uncorrected values.
Session Info
This chapter was compiled with R 4.0.3, using (among other packages) batchelor 1.6.2, scran 1.18.1, scater 1.18.3, ggplot2 3.3.2, pheatmap 1.0.12, BiocSingular 1.6.0 and SingleCellExperiment 1.12.0.
Bibliography
Butler, A., P. Hoffman, P. Smibert, E. Papalexi, and R. Satija. 2018. “Integrating single-cell transcriptomic data across different conditions, technologies, and species.” Nat. Biotechnol. 36 (5): 411–20.
Büttner, M., Z. Miao, F. A. Wolf, S. A. Teichmann, and F. J. Theis. 2019. “A test metric for assessing single-cell RNA-seq batch correction.” Nat. Methods 16 (1): 43–49.
Haghverdi, L., A. T. L. Lun, M. D. Morgan, and J. C. Marioni. 2018. “Batch effects in single-cell RNA-sequencing data are corrected by matching mutual nearest neighbors.” Nat. Biotechnol. 36 (5): 421–27.
0
Hi guys, I'm working on this program; I haven't finished it yet.
I'm not getting any errors, but the program is not running correctly. I don't know why; I did everything I had to, and the build says "1 succeeded" with no warnings!
Here is my code:
#include <iostream>
#include <fstream>
#include <cstdlib>
using namespace std;

int main()
{
    int num;

    do
    {
        cout << endl
             << "*****************************************" << endl
             << "Welcome to J-Otaku-Air travel agency." << endl
             << "Please choose a number from the following menu, then press enter to "
             << "move to the desired page." << endl << endl
             << "1. About J-Otaku-Air" << endl
             << "2. Reservations" << endl
             << "3. Ticket sales" << endl
             << "4. Membership and frequent flier miles" << endl
             << "5. Ticket sales" << endl
             << "6. Special offers" << endl
             << "7. Frequently asked questions" << endl
             << "8. Contact us" << endl
             << "* Press 0 to exit" << endl
             << "*****************************************" << endl << endl;
        cin >> num;
        cout << endl;
        num++;
    } while (num != 0);

    switch (num)
    {
    case (0):
        cout << "End of program" << endl;
    case (1):
        cout << "J-Otaku-Air was founded on July 1990. Founded by two individuals in an" << endl
             << "extraordinary partnership." << endl
             << "Wareef Al-Omair and Fadia Banafe built this company with the goal of helping" << endl
             << "accommodate the curious nature of people and their need for exploration. " << endl
             << "Throuh out the years, J-Otaku-Air has been thriving to expand its services and" << endl
             << "offer the best for their clients. And along their journey their hard work has" << endl
             << "been rewarded by many establishments." << endl
             << "In hopes of seeing you in one of our flights." << endl << endl
             << "Sincerely," << endl << endl
             << "          J-Otaku-Air" << endl;
        break;
    default:
        cout << "Invalid number. Please choose again.";
        break;
    }

    return 0;
}
A tool to generate stringed instrument visual aids
scales
A tool to generate stringed instrument visual aids
Installation
Install using:
pip install scales.py
Then import it like so:
from scales import Scales
Usage
Scales objects take at least the
scale parameter, which should be a list of notes (the first note is treated as the root). They can be drawn using the
draw function, where
start and
stop are the range of frets to show (defaults to showing 15 frets if not specified).
For example:
a_mixo = ['A', 'B', 'C#', 'D', 'E', 'F#', 'G'] six_string = Scales(title='A Mixolydian', scale=a_mixo) six_string.draw(start=11, stop=15)
By default, it will assume a 6 string guitar in standard tuning, but you can specify other tunings like:
c_major = ['C', 'D', 'E', 'F', 'G', 'A', 'B'] ukulele = Scales(title='C Major on Ukulele', strings=['G', 'C', 'E', 'A'], scale=c_major) ukulele.draw()
Scales can be drawn without making it an object:
g_chord = ['G', 'B', 'D'] Scales(title='Open G', scale=g_chord).draw(stop=3)
Other helpful uses
Scales can be used to generate blank visuals to print out. Here's a blank 6-string fretboard:
Scales([]).draw()
Or a blank open chord chart:
Scales([]).draw(stop=3)
Dependencies
I saw the light
The number of Silverlight polling duplex clients is limited by the MaxConcurrentSessions throttling property. The default value is 10.
To increase it, you should programmatically add a ServiceThrottlingBehavior.
Here is some code that shows how it could be done:
public class YourDuplexServiceFactory : ServiceHostFactoryBase
{
    public override ServiceHostBase CreateServiceHost(string constructorString,
        Uri[] baseAddresses)
    {
        return new PollingDuplexSimplexServiceHost(baseAddresses);
    }
}

internal class PollingDuplexSimplexServiceHost : ServiceHost
{
    public PollingDuplexSimplexServiceHost(params Uri[] addresses)
    {
        InitializeDescription(typeof(YourDuplexService), new UriSchemeKeyedCollection(addresses));
        Description.Behaviors.Add(new ServiceMetadataBehavior());

        var throttle = Description.Behaviors.Find<ServiceThrottlingBehavior>();
        if (throttle == null)
        {
            throttle = new ServiceThrottlingBehavior
            {
                MaxConcurrentCalls = 12,
                MaxConcurrentSessions = 34,
                MaxConcurrentInstances = 56
            };
            Description.Behaviors.Add(throttle);
        }
    }

    protected override void InitializeRuntime()
    {
        // Add an endpoint for the given service contract.
        AddServiceEndpoint(
            typeof(IYourDuplexService),
            new CustomBinding(
                new PollingDuplexBindingElement
                {
                    InactivityTimeout = TimeSpan.FromSeconds(3600)
                },
                new BinaryMessageEncodingBindingElement(),
                new HttpTransportBindingElement()),
            "");

        // Add a metadata endpoint.
        AddServiceEndpoint(
            typeof(IMetadataExchange),
            MetadataExchangeBindings.CreateMexHttpBinding(),
            "mex");

        base.InitializeRuntime();
    }
}
PS: I don't like how duplex services work. For example, without any configuration the 11th user of your application based on duplex services will get an error. Sure, you can increase MaxConcurrentSessions with this approach, but you will soon reach the new limit as well. If you have had the same experience, please write about it in the comments!
Many thanks. I hit the default limit of about 8 and wondered why concurrent sessions were being throttled. This has helped me get around that.
Yeah, this was a pretty dumb default on Microsoft's part. "Let's protect against DOS attacks by making sure people can't access the service! Yeah, that's the ticket!"
I have used this code in my application. I have developed a Silverlight 3.0 chat application using polling duplex, but only 10 chat windows will open at a time on a single client. Every client can open only 10 chat windows, but I want it to be unlimited. How is that possible with a polling duplex WCF service?
#amritpal
Yes, I am also interested to know whether unlimited polling is possible in a duplex WCF service. Also, are sockets a more reliable solution? Are sockets limited to a certain number of connections?
Hi all, I'm using the IR library by Ken Shirriff. I have it working for the most part, but I have had a problem. It took me some time to figure out why my code wasn't working, but I solved it, of a sort, by trial and error.
When I perform an if statement to test for a key press on the IR receiver, it only seems to work if there is a delay of at least 1000 ms directly after the body of the if statement. I based my sketch on a simple example, and that did not seem to need this.
I have stripped my example down to just the part I'm talking about; you can see the delay statement in there. I would be very grateful if someone could tell me where I'm going wrong, or why it seems to be needed. It also seems that if I drop the delay below 100 it doesn't work either.
The whole thing would work better if it didn’t do this. Thanks in advance
#include <IRremote.h>

IRrecv irrecv(RECV_PIN);
decode_results results;

void setup() {
  Serial.begin(9600);
  irrecv.enableIRIn(); // Start the receiver
}

////////////////// Loop
void loop() {
  Detect();
  if (results.value == 16755285) {
    //// run some code in here
  }
  delay(1000); // Why does it seem to need this? Is it some kind of debounce?
}

////////////////// Functions
void Detect() {
  irrecv.decode(&results);
  irrecv.resume();
}
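For comparison, the examples that ship with this library gate everything on the return value of decode(), and only call resume() after a frame has actually been handled. A sketch of that structure (untested; the receiver pin number is an assumption):

#include <IRremote.h>

const int RECV_PIN = 11;   // assumed receiver pin
IRrecv irrecv(RECV_PIN);
decode_results results;

void setup() {
  Serial.begin(9600);
  irrecv.enableIRIn();     // start the receiver
}

void loop() {
  // Only runs when a complete IR frame has been decoded,
  // so no blocking delay() is needed to avoid repeat triggers.
  if (irrecv.decode(&results)) {
    if (results.value == 16755285) {
      // run some code here
    }
    irrecv.resume();       // ready for the next frame
  }
}

Without the decode() check, results.value keeps its last value on every pass through loop(), which is one reason a bare comparison can appear to fire continuously.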
Add ids to facilitate add-ons
RESOLVED FIXED
Status
General
People
(Reporter: Blake Ross, Unassigned)
Tracking
Firefox Tracking Flags
(Not tracked)
Details
Attachments
(5 obsolete attachments)
Target Milestone: --- → Phoenix0.2
-> hyatt
Assignee: blaker → hyatt
Target Milestone: Phoenix0.2 → Phoenix0.3
Status: NEW → ASSIGNED
Target Milestone: Phoenix0.3 → Phoenix0.4
Not sure of the reaction to this comment, but what the heck: concerning the ids that are added, could it be possible to make sure that all the ids are unique across the app, as if there were an app-wide namespace? The reason I even mention this is that it might let someone developing for Phoenix uniquely locate any id across all the XUL pages, perhaps for some automation or tool of some kind. This comment may be completely spurious, but since Phoenix is something new, I thought it might be worth mentioning. If anything, once the ids are added, someone could index them, so that when an extension developer wants to know "how do I overlay over menu x" they'll be able to search via LXR (or however) and know just from the id whether it's the right location.
Target Milestone: Phoenix0.4 → Phoenix0.5
Summary: Add ids to menupopups to facilitate add-ons → Add ids to facilitate add-ons
Target Milestone: Phoenix0.5 → Phoenix0.6
Severity: normal → enhancement
The new "options" dialog xul is fairly lacking in ids right now. It will be essential (IMO) to add appropriate ids to this area so that some themes can be updated to work with Feb.+ builds as well as new themes, unless extending the dialog is to be discouraged in favor of individual extension prefs.
cd_cook wrote:
> unless extending the
> dialog is to be discouraged in favor of individual extension prefs.

Individual extension prefs are the way to go. -> Reassigning to me. Please request missing IDs here.
Assignee: hyatt → chanial
Status: ASSIGNED → NEW
Target Milestone: Phoenix0.6 → ---
likely to be WONTFIXed : ) ids for menus: file, edit, view, go, 'tasks' and help [bookmarks already has an id] + OT add ' persist="hidden"' on all menubar menus + OT to help Chris Cook with bug 193486, separate the menu section from browser.xul, as navigator, which has globalOverlay and navigatorOverlay (maybe others too)
This probably applies to both Mozilla and Phoenix, but could we get an ID for the window of the download progress dialog, nsProgressDialog.xul? In Phoenix it is found at: \phoenix\chrome\toolkit\content\global

<window xmlns:

(Or can this be referenced by class, and if so, is that better?)
I'd like to request a few IDs be added to make the Mac OS X menubar code work (/widget/src/mac/nsMenuBarX.cpp). In browser.xul, add:
- id="menu_FileQuitSeparator" to the <menuseparator/> just above the Quit menuitem
- id="menu_preferences" to the Prefs menuitem
- id="aboutName" to the About menuitem
How about ID tags for each grouping in Options. For example, Privacy has Disk Cache, but not Memory Cache settings. If an extension wanted to add in Mem Cache, it needs an ID to hook in.
Not so much an ID but related to add-ons anyway. Would it be possible to include a chunk of generic javascript somewhere that could be called upon to save preference changes involving extensions? This was formerly done by the fact that extensions would simply overlay into the existing preferences dialog. But now each author has to write their own, which seems somewhat wasteful and also makes converting Mozilla-only extensions a real chore. In fact I have yet to see an existing Mozilla extension that has been satisfactorily converted to the new Extensions system - TBE for example calls up chrome://communicator/content/pref/pref.xul to do it, which works but hardly seems "clean". Perhaps a new enhancement bug should be filed on this?
I'll second comment 5, regarding id tags for the individual menus. The only way currently to reference a menu is by the label tag (eg menu[label="File"] in CSS,) which is not consistent across different locales.
I'd like ids on the "Frame" menupopup, and on the menuitems it contains. In browser.xul:

<menu id="frame" label="&thisFrameMenu.label;" accesskey="&thisFrameMenu.accesskey;">
  <menupopup>   <--- here and its children

Right now it doesn't look like it's possible to overlay the Frame menu.
I'd like to see as many items as possible given ids. I have an extension that allows you to pick and choose what menu items you want to see. I will have to go in and assign an id to each item in the code if these aren't added. If they are added then I could do it in an overlay.
*** Bug 207518 has been marked as a duplicate of this bug. ***
Target Milestone: --- → Firebird1.0
I am requesting an id ("key_textZoomReset") for the following keyset key in browser/content/browser.xul:

<key key="&textZoomResetCmd.commandkey;"
     oncommand="ZoomManager.prototype.getInstance().reset();"
     modifiers="accel"/>

I override this key in my TextZoom extension, which will be around forever if the devs continue to support the belief that text-zooming should not be a permanently configurable preference.
QA Contact: asa
I'd like IDs for all the menuitems, which would make it a lot easier to add CuteMenus-style icons to a skin that I'm distributing. If not all, then at least the following: File > New Window, New Tab, Close Tab, Save Page As, Send Page, Print, Exit; Edit > Undo, Redo, Cut, Copy, Paste, Delete; View > Reload, Increase Text Size, Decrease Text Size, Page Source, Full Screen; Go > Home, History; Bookmarks > Add to Bookmarks, Manage Bookmarks; Tools > Downloads, JavaScript Console, Page Info, Options. It may seem like a lot, but it would greatly expand theme makers' possibilities. (Or so I think, anyway.)
I would like to see some ids added in bookmarksProperties.xul so that I can add some favicon options in the "Info" tab, preferably for the <rows> tag that contains the <row>s for name, location, shortcutUrl, and description.
I'd like to add a menuitem to the "Bookmarks" menu (id="bookmarks-menu"). Please give ids to its child elements:

menupopup
menuseparator

Thanks.
Created attachment 156526 [details] [diff] [review] Patch for trunk : Add ids for menus, menupopups, menuseparators Partially addressing comment 5, comment 7, comment 10 and comment 17 Also adds bookmark URI mouseover
Created attachment 157486 [details] [diff] [review] Patch for trunk against r 1.35 of browser/base/content/browser-menubar.inc checkin for bug 256862 (for trunk at present) fixed part 1 of comment 17
Attachment #156526 - Attachment is obsolete: true
Actually for branch as well. [ Persons Cc'ed, since bug 256862 got fixed by actions of those I'm Cc'ing ]
Created attachment 157488 [details] [diff] [review] Patch for trunk against r 1.35 of browser/base/content/browser-menubar.inc w/o mouseover changes on bookmarks menu
Created attachment 157489 [details] [diff] [review] equivalent of attachment 157488 [details] [diff] [review] for branch
Created attachment 186057 [details] [diff] [review] revived patch based on attachment 157488 [details] [diff] [review] for trunk with ids for menuseparators
Comment on attachment 157489 [details] [diff] [review] equivalent of attachment 157488 [details] [diff] [review] for branch (clearing review request.. this didn't make it into 1.0, but should make it into 1.1)
Assignee: p_ch → nobody
QA Contact: general
Comment on attachment 186057 [details] [diff] [review] revived patch based on attachment 157488 [details] [diff] [review] for trunk with ids for menuseparators At least part of this patch is rotten, IDs were added as part of bug 167391.
Attachment #186057 - Attachment is obsolete: true
Hardware: PC → All
Target Milestone: Firefox1.0 → ---
We've fixed a bunch of stuff, please file bugs for places that still need IDs.
Status: NEW → RESOLVED
Last Resolved: 10 years ago
Resolution: --- → FIXED | https://bugzilla.mozilla.org/show_bug.cgi?id=170243 | CC-MAIN-2018-09 | refinedweb | 1,303 | 61.67 |
An atomic box is a graphical object that helps you encapsulate graphical, truth table, MATLAB®, and Simulink® functions in a separate namespace. Atomic boxes are supported only in Stateflow® charts in Simulink models.
An atomic box looks opaque and includes the label Atomic in the upper left corner. If you use a linked atomic box from a library, the label Link appears in the upper left corner.
This example shows how to use a linked atomic box to reuse a graphical function across multiple charts and models.
The function GetTime is defined in a chart in the library model sf_timer_utils_lib. The graphical function returns the simulation time in C charts where the equivalent MATLAB® function getSimulationTime is not available.
The model sf_timer_function_calls consists of two charts with a similar structure. Each chart contains a pair of states (A and B) and an atomic box (Time) linked from the library chart. The entry action in state A calls the function GetTime and stores its value as t0. The condition guarding the transition from A to B calls the function again and compares its output with the parameter T.
The top model sf_timer_modelref reuses the timer function in multiple referenced blocks. Because there are no exported functions, you can use more than one instance of the referenced block in the top model.
Atomic boxes combine the functionality of normal boxes and atomic subcharts. Atomic boxes:
Improve the organization and clarity of complex charts.
Support usage as library links.
Support the generation of reusable code.
Allow mapping of inputs, outputs, parameters, data store memory, and input events.
Atomic boxes contain only functions. They cannot contain states. Adding a state to an atomic box results in a compilation-time error.
To call a function that resides in an atomic box from a location outside the atomic box, use dot notation to specify its full path:
atomic_box_name.function_name
Calling a function by its full path:
Makes clear the dependency on the function in the linked atomic box.
Avoids pollution of the global namespace.
Does not affect the efficiency of generated code.
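For example, with the linked atomic box named Time from the earlier example, a state entry action elsewhere in the chart would call the boxed function by its full path (an illustrative action-language sketch, not taken verbatim from the shipped model):

entry: t0 = Time.GetTime();

A transition condition can call it the same way, for instance [Time.GetTime() - t0 >= T].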
You can create an atomic box by converting an existing box or by linking a chart from a library model. After creating the atomic box, update the mapping of variables by right-clicking the atomic box and selecting Subchart Mappings. For more information, see Map Variables for Atomic Subcharts and Boxes.
To create a container for your functions that allows for faster debugging and code generation workflows, convert an existing box into an atomic box. In your chart, right-click a normal box and select Group & Subchart > Atomic Subchart. The label Atomic appears in the upper left corner of the box.
The conversion process gives the atomic box its own copy of every data object that the box accesses in the chart. Local data is copied as data store memory. The scope of other data, including input and output data, does not change.
Note
If a box contains any states or messages, you cannot convert it to an atomic box.
To create a collection of functions for reuse across multiple charts and models, create a link from a library model. Copy a chart in a library model and paste it to a chart in another model. If the library chart contains only functions and no states, it appears as a linked atomic box with the label Link in the upper left corner.
This modeling method minimizes maintenance of reusable functions. When you modify the atomic box in the library, your changes propagate to the links in all charts and models.
If the library chart contains any states, then it appears as a linked atomic subchart in the chart. For more information, see Create Reusable Subcomponents by Using Atomic Subcharts.
Converting an atomic box back to a normal box removes all of its variable mappings by merging subchart-parented data objects with the chart-parented data to which they map.
If the atomic box is a library link, right-click the atomic box and select Library Link > Disable Link.
To convert an atomic box to a subcharted box, right-click the atomic box and clear the Group & Subchart > Atomic Subchart check box.
To convert the subcharted box back to a normal box, right-click the subchart and clear the Group & Subchart > Subchart check box.
If necessary, rearrange graphical objects in your chart.
You cannot convert an atomic box to a normal box if:
The atomic box maps a parameter to an expression other than a single variable name. For example, mapping a parameter data1 to one of these expressions prevents the conversion of an atomic box to a normal box:
3
data2(3)
data2 + 3
Both of these conditions are true:
The atomic box contains MATLAB functions or truth table functions that use MATLAB as the action language.
The atomic box does not map each variable to a variable of the same name in the main chart.
Suppose that you want to test a sequence of changes to a library of functions. The functions are part of a chart that contains many states or several levels of hierarchy, so recompiling the entire chart can take a long time. If you define the functions in an atomic box, recompilation occurs for only the box and not for the entire chart. For more information, see Reduce the Compilation Time of a Chart.
Suppose that you have a set of functions for use in multiple charts and models. The functions reside in the library model to enable easier configuration management. To use the functions in another model, you can either:
Configure the library chart to export functions and create a link to the library chart in the model.
Link the library chart as an atomic box in each chart of the model.
Models that use these functions can appear as referenced blocks in a top model. When the functions are exported, you can use only one instance of that referenced block for each top model. For more information, see Model Reference Requirements and Limitations.
With atomic boxes, you can avoid this limitation. Because there are no exported functions in the charts, you can use more than one instance of the referenced block in the top model.
Suppose that multiple people are working on different parts of a chart. If you store each library of functions in a linked atomic box, different people can work on different libraries without affecting the other parts of the chart. For more information, see Divide a Chart into Separate Units.
Suppose that you want to inspect code generated by Simulink Coder™ or Embedded Coder® manually for a specific function. You can specify that the code for an atomic box appears in a separate file to avoid searching through unrelated code. For more information, see Generate Code from Atomic Subcharts. | https://au.mathworks.com/help/stateflow/ug/reusing-graphical-functions-with-atomic-boxes.html | CC-MAIN-2020-40 | refinedweb | 1,129 | 63.49 |
Florin Florentin wrote: Thank you for your quick replies and your help, Mark. You helped me.
One more question. In this case,
public class Main {
static <T> void test(T t1,T t2) {}
public static void main(String[] args) {
Object o1 = new Object();
String o2 = new String("");
test(o1, o2);
}
}
the code compiles, so why, in this case, must the second argument not be a supertype of the first type? Maybe it's a silly question, but I don't know why.
Jim Hoglund wrote:Try this. The compiler error generated shows that 'T' is set to type Object.
public class Today_0809 {
static <T> T test(T t1, T t2){
return t1;
}
public static void main(String[] args) {
String str = test(new String("--"), new Object());
}
}Jim ... ...
Mark Moge wrote:Ok I was wrong. And after some thinking ... in my opinion it works like this: if you have a generic declaration of a method
static <T> void test(T t1,T t2) {}
T in the method works as if T extends a common superclass of all the arguments passed to the method.
So you can put any type as argument:
static <T> T test(T t1,T t2) {
return t1;
}
Number n1 = test(new Integer(8), new Double(8.8)); //ok
Object o1 = test(new Object(), new String("go")); //ok
Kevin Kilbane wrote:
When the arguments are of the same type, the generic type is that type
Kevin Kilbane wrote:
When more than one argument is passed to a method that defines a generic type and those arguments are not all of the same type then the compiler treats the generic type as Object
Kevin Kilbane wrote:
1. If the arguments are all of the same type then the compiler treats T as that type e.g. if all arguments are Strings then T is treated as a String. Simple.
2. If the arguments are of different types but are in the same class hierarchy then the compiler treats T as the highest in the hierarchy e.g. if the arguments are of types Integer and Number then T is treated as Number.
Kevin Kilbane wrote:
3. If the arguments are of different types and are NOT in the same class hierarchy then the compiler treats T as Object e.g. if the arguments are of types Integer and String then T is treated as Object by the compiler. | http://www.coderanch.com/t/505983/java-programmer-SCJP/certification/Type-inference-generic-mehods | CC-MAIN-2015-11 | refinedweb | 420 | 78.59 |
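The three cases Kevin describes can be seen side by side in a small sketch (the method name `first` is hypothetical, modeled on the `test` methods above; in case 3 the compiler actually infers an intersection of common supertypes, so `Object` is simply the safe assignment target):

```java
public class InferenceDemo {
    // T is inferred from both arguments together.
    static <T> T first(T a, T b) {
        return a;
    }

    public static void main(String[] args) {
        // 1. Same type: T is inferred as String.
        String s = first("a", "b");

        // 2. Same hierarchy: Integer and Double share the supertype Number,
        //    so the result can be assigned to a Number.
        Number n = first(Integer.valueOf(1), Double.valueOf(2.5));

        // 3. Unrelated types: only a very general type (such as Object)
        //    is guaranteed to hold the result.
        Object o = first("text", Integer.valueOf(3));

        System.out.println(s + " " + n + " " + o); // prints: a 1 text
    }
}
```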
Announcing TypeScript 3.5
Daniel
Today we’re happy to announce the availability of TypeScript 3.5!
If you’re new to TypeScript, it’s a language that builds on JavaScript that adds optional static types. TypeScript code gets type-checked to avoid common mistakes like typos and accidental coercions, and then gets transformed by a program called the TypeScript compiler. The compiler strips out any TypeScript-specific syntax and optionally transforms your code to work with older browsers, leaving you with clean, readable JavaScript that can run in your favorite browser or Node.js. Built on top of all this is also a language service which uses all the type information TypeScript has to provide powerful editor functionality like code completions, find-all-references, quick fixes, and refactorings. All of this is cross-platform, cross-editor, and open source.
TypeScript also provides that same tooling for JavaScript users, and can even type-check JavaScript code typed with JSDoc using the checkJs flag. If you've used editors like Visual Studio or Visual Studio Code with .js files, TypeScript powers that experience, so you might already be using TypeScript!
To get started with TypeScript, you can get it through NuGet, or through npm with the following command:
npm install -g typescript
You can also get editor support by
- Downloading for Visual Studio 2019 and 2017 (for version 15.2 or later)
- Using tonight’s Visual Studio Code Insiders (or by manually setting the editor up).
- Sublime Text 3 via PackageControl.
Support for other editors will likely be rolling in in the near future.
Let’s explore what’s new in 3.5!
- Compiler and Language
- Editor Tooling
- Breaking changes
- What’s next
Speed improvements
TypeScript 3.5 introduces several optimizations around type-checking and incremental builds.
Type-checking speed-ups
Much of the expressivity of our type system comes with a cost – any more work that we expect the compiler to do translates to longer compile times. Unfortunately, as part of a bug fix in TypeScript 3.4 we accidentally introduced a regression that could lead to an explosion in how much work the type-checker did, and in turn, type-checking time. The most-impacted set of users were those using the styled-components library. This regression was serious not just because it led to much higher build times for TypeScript code, but because editor operations for both TypeScript and JavaScript users became unbearably slow.
Over this past release, we focused heavily on optimizing certain code paths and stripping down certain functionality to the point where TypeScript 3.5 is actually faster than TypeScript 3.3 for many incremental checks. Not only have compile times fallen compared to 3.4, but code completion and any other editor operations should be much snappier too.
If you haven’t upgraded to TypeScript 3.4 due to these regressions, we would value your feedback to see whether TypeScript 3.5 addresses your performance concerns!
--incremental improvements
TypeScript 3.4 introduced a new --incremental compiler option. This option saves a bunch of information to a .tsbuildinfo file that can be used to speed up subsequent calls to tsc.
TypeScript 3.5 includes several optimizations to caching how the state of the world was calculated – compiler settings, why files were looked up, where files were found, etc. In scenarios involving hundreds of projects using TypeScript's project references in --build mode, we've found that the amount of time rebuilding can be reduced by as much as 68% compared to TypeScript 3.4!
For more details, you can see the relevant pull requests on GitHub.
The Omit helper type
Much of the time, we want to create an object that omits certain properties. It turns out that we can express types like that using TypeScript's built-in Pick and Exclude helpers. For example, if we wanted to define a Person that has no location property, we could write the following:
type Person = {
    name: string;
    age: number;
    location: string;
};

type RemainingKeys = Exclude<keyof Person, "location">;

type QuantumPerson = Pick<Person, RemainingKeys>;

// equivalent to
type QuantumPerson = {
    name: string;
    age: number;
};
Here we "subtracted" "location" from the set of properties of Person using the Exclude helper type. We then picked them right off of Person using the Pick helper type.
It turns out this type of operation comes up frequently enough that users will write a helper type to do exactly this:
type Omit<T, K extends keyof any> = Pick<T, Exclude<keyof T, K>>;
Instead of making everyone define their own version of Omit, TypeScript 3.5 will include its own in lib.d.ts which can be used anywhere. The compiler itself will use this Omit type to express types created through object rest destructuring declarations on generics.
For more details, see the pull request on GitHub to add Omit, as well as the change to use Omit for object rest.
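For instance, with the built-in helper the Person example above collapses to a single line (a small sketch; the type is repeated here so the snippet stands alone, and the object values are illustrative):

```typescript
type Person = {
    name: string;
    age: number;
    location: string;
};

// Built in as of TypeScript 3.5 – no need for the Pick/Exclude plumbing.
type QuantumPerson = Omit<Person, "location">;

// 'location' is gone, so a value needs only 'name' and 'age'.
const p: QuantumPerson = { name: "Ada", age: 36 };
```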
Improved excess property checks in union types
TypeScript has a feature called excess property checking in object literals. This feature is meant to detect typos for when a type isn’t expecting a specific property.
type Style = {
    alignment: string,
    color?: string
};

const s: Style = {
    alignment: "center",
    colour: "grey"
//  ^^^^^^ error!
};
In TypeScript 3.4 and earlier, certain excess properties were allowed in situations where they really shouldn't have been. For instance, TypeScript 3.4 permitted the incorrect name property in the object literal even though its types don't match between Point and Label.
type Point = {
    x: number;
    y: number;
};
type Label = {
    name: string;
};

const thing: Point | Label = {
    x: 0,
    y: 0,
    name: true // uh-oh!
};
Previously, a non-discriminated union wouldn't have any excess property checking done on its members, and as a result, the incorrectly typed name property slipped by.
In TypeScript 3.5, the type-checker at least verifies that all the provided properties belong to some union member and have the appropriate type, meaning that the sample above correctly issues an error.
Note that partial overlap is still permitted as long as the property types are valid.
const pl: Point | Label = {
    x: 0,
    y: 0,
    name: "origin" // okay
};
The --allowUmdGlobalAccess flag
In TypeScript 3.5, you can now reference UMD global declarations like

export as namespace foo;

from anywhere – even modules – using the new --allowUmdGlobalAccess flag.
This feature might require some background if you’re not familiar with UMD globals in TypeScript. A while back, JavaScript libraries were often published as global variables with properties tacked on – you sort of hoped that nobody picked a library name that was identical to yours. Over time, authors of modern JavaScript libraries started publishing using module systems to prevent some of these issues. While module systems alleviated certain classes of issues, they did leave users who were used to using global variables out in the rain.
As a work-around, many libraries are authored in a way that define a global object if a module loader isn’t available at runtime. This is typically leveraged when users target a module format called “UMD”, and as such, TypeScript has a way to describe this pattern which we’ve called “UMD global namespaces”:
export as namespace preact;
Whenever you’re in a script file (a non-module file), you’ll be able to access one of these UMD globals.
So what's the problem? Well, not all libraries conditionally set their global declarations. Some just always create a global in addition to registering with the module system. We decided to err on the more conservative side, and many of us felt that if a library could be imported, that was probably the intent of the author.
In reality, we received a lot of feedback that users were writing modules where some libraries were consumed as globals, and others were consumed through imports. So in the interest of making those users' lives easier, we've introduced the allowUmdGlobalAccess flag in TypeScript 3.5.
For more details, see the pull request on GitHub.
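For reference, the flag can also be turned on in a project's tsconfig.json under compilerOptions; a minimal sketch (the module setting here is illustrative, not required by the flag):

{
  "compilerOptions": {
    "module": "commonjs",
    "allowUmdGlobalAccess": true
  }
}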
Smarter union type checking
When checking against union types, TypeScript typically compares each constituent type in isolation. For example, take the following code:
type S = { done: boolean, value: number }
type T =
    | { done: false, value: number }
    | { done: true, value: number };

declare let source: S;
declare let target: T;

target = source;
Assigning source to target involves checking whether the type of source is assignable to target. That in turn means that TypeScript needs to check whether S:
{ done: boolean, value: number }
is assignable to T:
{ done: false, value: number } | { done: true, value: number }
Prior to TypeScript 3.5, the check in this specific example would fail, because S isn't assignable to { done: false, value: number } nor { done: true, value: number }. Why? Because the done property in S isn't specific enough – it's boolean whereas each constituent of T has a done property that's specifically true or false. That's what we meant by each constituent type being checked in isolation: TypeScript doesn't just union each property together and see if S is assignable to that. If it did, some bad code could get through like the following:
interface Foo {
    kind: "foo";
    value: string;
}
interface Bar {
    kind: "bar";
    value: number;
}

function doSomething(x: Foo | Bar) {
    if (x.kind === "foo") {
        x.value.toLowerCase();
    }
}

// uh-oh - luckily TypeScript errors here!
doSomething({
    kind: "foo",
    value: 123,
});
So clearly this behavior is good for some set of cases. Was TypeScript being helpful in the original example though? Not really. If you figure out the precise type of any possible value of S, you can actually see that it matches the types in T exactly.
That's why in TypeScript 3.5, when assigning to types with discriminant properties like in T, the language actually will go further and decompose types like S into a union of every possible inhabitant type. In this case, since boolean is a union of true and false, S will be viewed as a union of { done: false, value: number } and { done: true, value: number }.
For more details, you can see the original pull request on GitHub.
Higher order type inference from generic constructors
In TypeScript 3.4, we improved inference for when generic functions that return functions like so:
function compose<T, U, V>(
    f: (x: T) => U,
    g: (y: U) => V): (x: T) => V {
    return x => g(f(x))
}
took other generic functions as arguments, like so:
function arrayify<T>(x: T): T[] {
    return [x];
}

type Box<U> = { value: U }
function boxify<U>(y: U): Box<U> {
    return { value: y };
}

let newFn = compose(arrayify, boxify);
Instead of a relatively useless type like (x: {}) => Box<{}[]>, which older versions of the language would infer, TypeScript 3.4's inference allows newFn to be generic. Its new type is <T>(x: T) => Box<T[]>.
TypeScript 3.5 generalizes this behavior to work on constructor functions as well.
class Box<T> {
    kind: "box";
    value: T;
    constructor(value: T) {
        this.value = value;
    }
}
class Bag<U> {
    kind: "bag";
    value: U;
    constructor(value: U) {
        this.value = value;
    }
}
function composeCtor<T, U, V>(
    F: new (x: T) => U,
    G: new (y: U) => V): (x: T) => V {
    return x => new G(new F(x))
}

let f = composeCtor(Box, Bag); // has type '<T>(x: T) => Bag<Box<T>>'
let a = f(1024); // has type 'Bag<Box<number>>'
In addition to compositional patterns like the above, this new inference on generic constructors means that functions that operate on class components in certain UI libraries like React can more correctly operate on generic class components.
type ComponentClass<P> = new (props: P) => Component<P>;
declare class Component<P> {
    props: P;
    constructor(props: P);
}

declare function myHoc<P>(C: ComponentClass<P>): ComponentClass<P>;

type NestedProps<T> = { foo: number, stuff: T };

declare class GenericComponent<T> extends Component<NestedProps<T>> {
}

// type is 'new <T>(props: NestedProps<T>) => Component<NestedProps<T>>'
const GenericComponent2 = myHoc(GenericComponent);
To learn more, check out the original pull request on GitHub.
Smart Select
TypeScript 3.5 provides an API for editors to expand text selections farther and farther outward in a way that is syntactically aware – in other words, the editor knows which constructs it should expand out to. This feature is called Smart Select, and the result is that editors don’t have to resort to heuristics like brace-matching, and you can expect selection expansion in editors like Visual Studio Code to “just work”.
As with all of our editing features, this feature is cross-platform and available to any editor which can appropriately query TypeScript’s language server.
Extract to type alias
Thanks to Wenlu Wang (GitHub user @Kingwl), TypeScript supports a useful new refactoring to extract types to local type aliases.
For those who prefer interfaces over type aliases, an issue exists for extracting object types to interfaces as well.
Breaking changes
Generic type parameters are implicitly constrained to
unknown
In TypeScript 3.5, generic type parameters without an explicit constraint are now implicitly constrained to
unknown, whereas previously the implicit constraint of type parameters was the empty object type
{}.
In practice,
{} and
unknown are pretty similar, but there are a few key differences:
{}can be indexed with a string (
k["foo"]), though this is an implicit
anyerror under
--noImplicitAny.
{}is assumed to not be
nullor
undefined, whereas
unknownis possibly one of those values.
{}is assignable to
object, but
unknownis not.
The decision to switch to
unknown is rooted that it is more correct for unconstrained generics – there’s no telling how a generic type will be instantiated.
On the caller side, this typically means that assignment to
object will fail, and methods on
Object like
toString,
toLocaleString,
valueOf,
hasOwnProperty,
isPrototypeOf, and
propertyIsEnumerable will no longer be available.
function foo<T>(x: T): [T, string] { return [x, x.toString()] // ~~~~~~~~ error! Property 'toString' does not exist on type 'T'. }
As a workaround, you can add an explicit constraint of
{} to a type parameter to get the old behavior.
// vvvvvvvvvv function foo<T extends {}>(x: T): [T, string] { return [x, x.toString()] }
From the caller side, failed inferences for generic type arguments will result in
unknown instead of
{}.
function parse<T>(x: string): T { return JSON.parse(x); } // k has type 'unknown' - previously, it was '{}'. const k = parse("...");
As a workaround, you can provide an explicit type argument:
// 'k' now has type '{}' const k = parse<{}>("...");
{ [k: string]: unknown } is no longer a wildcard assignment target
The index signature
{ [s: string]: any } in TypeScript behaves specially: it’s a valid assignment target for any object type. This is a special rule, since types with index signatures don’t normally produce this behavior.
Since its introduction, the type
unknown in an index signature behaved the same way:
let dict: { [s: string]: unknown }; // Was okay dict = () => {};
In general this rule makes sense; the implied constraint of “all its properties are some subtype of
unknown” is trivially true of any object type. However, in TypeScript 3.5, this special rule is removed for
{ [s: string]: unknown }.
This was a necessary change because of the change from
{} to
unknown when generic inference has no candidates. Consider this code:
declare function someFunc(): void; declare function fn<T>(arg: { [k: string]: T }): void; fn(someFunc);
In TypeScript 3.4, the following sequence occurred:
- No candidates were found for
T
Tis selected to be
{}
someFuncisn’t assignable to
argbecause there are no special rules allowing arbitrary assignment to
{ [k: string]: {} }
- The call is correctly rejected
Due to changes around unconstrained type parameters falling back to
unknown (see above),
arg would have had the type
{ [k: string]: unknown }, which anything is assignable to, so the call would have incorrectly been allowed. That’s why TypeScript 3.5 removes the specialized assignability rule to permit assignment to
{ [k: string]: unknown }.
Note that fresh object literals are still exempt from this check.
const obj = { m: 10 }; // okay const dict: { [s: string]: unknown } = obj;
Depending on the intended behavior of
{ [s: string]: unknown }, several alternatives are available:
{ [s: string]: any }
{ [s: string]: {} }
object
unknown
any
We recommend sketching out your desired use cases and seeing which one is the best option for your particular use case.
Improved excess property checks in union types
As mentioned above, TypeScript 3.5 is stricter about excess property checks on constituents of union types.
We have not witnessed examples where this checking hasn’t caught legitimate issues, but in a pinch, any of the workarounds to disable excess property checking will apply:
- Add a type assertion onto the object (e.g.
{ myProp: SomeType } as ExpectedType)
- Add an index signature to the expected type to signal that unspecified properties are expected (e.g.
interface ExpectedType { myProp: SomeType; [prop: string]: unknown })
Fixes to unsound writes to indexed access types
TypeScript allows you to represent the operation of accessing a property of an object via the name of that property:
type A = { s: string; n: number; }; function read<K extends keyof A>(arg: A, key: K): A[K] { return arg[key]; } const a: A = { s: "", n: 0 }; const x = read(a, "s"); // x: string
While commonly used for reading values from an object, you can also use this for writes:
function write<K extends keyof A>(arg: A, key: K, value: A[K]): void { arg[key] = value; }
In TypeScript 3.4, the logic used to validate a write was much too permissive:
function write<K extends keyof A>(arg: A, key: K, value: A[K]): void { // ??? arg[key] = "hello, world"; } // Breaks the object by putting a string where a number should be write(a, "n", "oops");
In TypeScript 3.5, this logic is fixed and the above sample correctly issues an error.
Most instances of this error represent potential errors in the relevant code. If you are convinced that you are not dealing with an error, you can use a type assertion instead.
lib.d.ts includes the
Omit helper type
TypeScript 3.5 includes a new
Omit helper type. As a result, any global declarations of
Omit included in your project will result in the following error message:
Duplicate identifier 'Omit'.
Two workarounds may be used here:
- Delete the duplicate declaration and use the one provided in
lib.d.ts.
- Export the existing declaration from a module file or a namespace to avoid a global collision. Existing usages can use an
importor explicit reference to your project’s old
Omittype.
Object.keys rejects primitives in ES5
In ECMAScript 5 environments,
Object.keys throws an exception if passed any non-
object argument:
// Throws if run in an ES5 runtime Object.keys(10);
In ECMAScript 2015,
Object.keys returns
[] if its argument is a primitive:
// [] in ES6 runtime Object.keys(10);
This is a potential source of error that wasn’t previously identified. In TypeScript 3.5, if
target (or equivalently
lib) is
ES5, calls to
Object.keys must pass a valid
object.
In general, errors here represent possible exceptions in your application and should be treated as such. If you happen to know through other means that a value is an
object, a type assertion is appropriate:
function fn(arg: object | number, isArgActuallyObject: boolean) { if (isArgActuallyObject) { const k = Object.keys(arg as object); } }
Note that this change interacts with the change in generic inference from
{} to
unknown, because
{} is a valid
object whereas
unknown isn’t:
declare function fn<T>(): T; // Was okay in TypeScript 3.4, errors in 3.5 under --target ES5 Object.keys(fn());
What’s next?
As with our last release, you can see our 3.6 iteration plan document, as well as the feature roadmap page to get an idea of what’s coming in the next version of TypeScript. We’re anticipating 3.6 will bring a better experience for authoring and consuming generators, support for ECMAScript’s private fields proposal, and APIs for build tools to support fast incremental builds and projects references. Also of note is the fact that as of TypeScript 3.6, our release schedule will be switching to a cadence of every 3 months (instead of every 2 months as it has been until this point). We believe this will make it easier for us to validate changes with partner teams and
We hope that this version of TypeScript makes you faster and happier as you code. Let us know what you think of this release on Twitter, and if you’ve got any suggestions on what we can do better, feel free to file an issue on GitHub.
Happy hacking!
– Daniel Rosenwasser and the TypeScript team
Really great stuff, love the performance and union improvements.
i hope the focus of typescript improvements will move to FP for a while soon; pipe operators |>, >> and <<, as well as improved support for partial application, for instance would be great assets.
I m watching these good articles pass by from years on news feeds. Today i m collecting breath n time to write..
Would the auther of this n similar articles agree?.. Satya made us march behind his chakra engine like piedpiper just to ditch us to competitor engine. We kept marching behind powershell n this typescript, and not once we were warned python was creaping pass by under our chairs to become hit only to be woken up to reality where ms again telling us to change our religion from ms to another. We were never warned nor any articles passed by on news feed where ms would have fortold us to adapt accordingly, i/many kept in false hopes that ms has arrow in its quiver againt any competition. I dont want to whine/distract on winpho n other things i just need focus on these above 2 examples. Thanks.
Thanks for article! But guys, I think you should seriously considering looking into issues instead of adding new stuff. For example currently on Github it’s ~3750 open issues. Also I found that some “maintainers” often close inconvinient issues.
I eager to see Typescript to be part of ecosystem not just yet-another-transpiler-compiler, thus you have to hear what developers want to be fixed. For instance, Typescript still can’t produce VALID ESM output, e.g. with .js extenstion (adding it manually not an option, as it will break other build systems). There’re lots of such tickets, most of them are already closed. That question was raised since 2017, and not yet solved.
I absolutely have to agree with Alex. Three years on and no-one has bothered adding the “override” keyword to the language to ensure signature correctness of overridden methods. But one of the arguments seems to be that introducing a compiler flag to enforce overrides (with a fall-back of C++ style overrides as the “soft” default) is going to occupy too much of someone’s “mental space”. And then I see the fixes put into 3.5 that are probably irrelevant to 95% of people and clearly show that someone has the “mental space” to solve these kinds of issues.The other comment I’d make is that working software engineers are quite pragmatic about how they build code (I am, anyway). The language elements you are talking about here are in the 1% case for me, whereas an override keyword is in my 80% case (ie. it applies to what we do all day, every day, as a bread-and-butter engineering concept).
You have some awesome features in the language _but_ it seems like you are too focussed on the “cool and nerdy” stuff rather than stuff regular developers will benefit from _most_.
Here to bring some kudos and positive wibes! I was using TS in its 0.8-1.X heyday (2015-16), and I found it verbose, slow and hard to get the syntax playing nicely. Refactoring in Webstorm was also lightyears away from what I was used to in Java, and quite error prone (unit tests ftw!). I never used it again after that project, until I recently saw the 3.4 release.
That finally sparked my interest again: much faster (incremental) compiles makes for faster feedback (maybe even Gary Bernhard will be pleased?), sane co-living with functional programming, corporate backed intellisense that even works in Vim (!) and wonderful type inference! This is great work and I applaud you for getting to where you are today. I am finally getting back in the fold 🙂
Though the amount of typings needed for just a simple project as react-redux makes me think you still have some way to go before creating these types doesn’t absorb huge amounts of time 🙂
Trying to sanely convert an existing enterprise project in a gradual, file-by-file manner is still something I think you should address to increase adoption. Dedicating a two week sprint just to add types is not going to fly in the kind of startups I work in (good intentions aside).
Carl, you can certainly slowly introduce types to your project one file at a time. The VSCode did that for enabling strict null checks ( see ) – you can do something very similar for your own project.
It concerns me that the change from `{}` to `unknown` is so significant that TypeScript should have used version 4.0 instead of 3.5. I’m not a fan of breaking changes between minor versions within a major version number. Anyone upgrading from 3.x to 3.5 can reasonably assume that there are no _breaking changes_ and it’s safe to upgrade and get the performance improvements. And since so many use cases are affected by the breaking change, it’s really a major change.
> We believe this will make it easier for us to validate changes with partner teams and@Daniel Rosenwasser, don’t leave us hanging. What did you mean to say here? 👏
Thanks for prioritizing the Styled Component speed fixes – the weather’s getting too hot for pegged CPUs 😀 We will give 3.5 a spin shortly.
Awesome work guys 🙂
Hi guys, just wanted to report we’re still pegging CPUs with MacOS, VSCode, Create React App, TS, ESLint, and Styled Components on 3.5.1. I’ll open a GitHub issue as well if it will be helpful. I’ll do some more investigating first to isolate the cause. | https://devblogs.microsoft.com/typescript/announcing-typescript-3-5/ | CC-MAIN-2021-25 | refinedweb | 4,352 | 61.36 |
Introduction:. Tones may be defined but I did not provide an option to save the tones as wave files, however, there is an example of how to programmatically add the tones defined using the application included within the code.
If you don't need to generate tones for any specific purpose, the application is an excellent tool for annoying your friends, family, roommates, and co-workers.
Getting Started:
In order to get started, unzip the included project and open the solution in the Visual Studio 2005 environment (the code was written in 2003 and it works fine in 2003 but I have updated it to 2005). You will note that the project contains two significant files: CannedWCAtones.cs and frmMain.cs. The canned WCA tones class provides an example of how to programmatically add tones to a C# project while the form main class provides the GUI and the code necessary to drive the application.:
Speech 5.1 SDK:
You may also obtain a couple of additional voices (the SDK includes Microsoft Mary, Microsoft Mike, and Microsoft Sam) by downloading and the Microsoft Reader and additional TTS components found on this URL: (not required, but you will gain two additional voices if you do add these to your system)
You do not need to activate the reader for this to work, however, you can't install the additional voices unless you have the reader installed. start, you will see this form appear:
Figure 4: The main form of the Tone Generator Application
Looking at the form note that it contains a tabbed panel with four tabs. The first tab is used to define a sine wave tone. To try it out, key in 700 for the start frequency, key in 1700 for the end frequency, key in 85 for the duration of the tone in milliseconds, key in 20 for the steps (this defines how many steps the frequency will be divided into between the start and end frequencies, a small number makes the tone choppy, a larger value makes it smoother, too large a number and it slows it down), key in 15 for the dwell (how long the tone is rested between repetitions, and key in 20 for the number of repetitions. Click on "Play Tone" and you should be listening to a MIL-STD-411 Warning tone similar to what a military pilot would hear during an emergency.
Next, click on the Saw Tooth tab. The dialog box will show these control options: (figure 5)
Figure 5: Saw Tooth Tone Definition Options
Key in the values indicated in the figure and click on "Play Tone", you should hear something that sounds like a cheap telephone ringing. This definition is used to bounce two tones off one another, the first frequency is played first (oddly enough) and its duty cycle in this case is set to 50:50. The second tone's frequency and duty cycle are defined next, and lastly the number of repetitions is set. You can play around with the values and note the impact on the tone.
Next, click on the Presets tab. This update the dialog to this configuration: (Figure 6)
Figure 6: Preset Tones
The presets are tones link to tones defined programmatically and stored in the Canned WCA class file. Pop open the combo box, select a tone and hit the "Play Preset" button to listen to one of these tones.
The last tab to look at is the Voice tab. Click on Voice and examine the dialog: (figure 7)
Figure 7: Voice Dialog Options
To give this option a try, key some text into the message area, select a speaker from the combo box, and hit the play tone button. If you click on the Save to WAV file checkbox, you will be prompted to define a storage location for the wave file generated by voice synthesis. Note that you may use punctuation to alter the tempo of the voice's playback, for example, "ENGINE FIRE! LEFT!" will not play the same as "ENGINE FIRE LEFT" or "ENGINE FIRE, LEFT". I have found that misspelling some of the words is a useful way to alter the playback to sound more human so if you care to try that out you will find that "ENGIN FIRE LEFT" does not sound the same as "ENGINE FIRE LEFT". To try this out, key this phrase into the message text box and click "Play Tone", "japanese,,,,,jaupa-knees".
Close the application and let's take a look at the code driving it.
The Code for frmMain.cs.
First, take a look at the references in the class, they are as follows:
using System;
using System.Drawing;
using System.Collections;
using System.ComponentModel;
using System.Windows.Forms;
using System.Data;
using System.Runtime.InteropServices;
using SAMPLETTSENGLib;
using System.Threading;
using SpeechLib;
namespace ToneGenerator
{
/// <summary>
/// This application is used to define and test tones for use in the
/// warning/caution/advisory system. It supports sine and sawtooth
/// wave forms and can be used to define tones in accordance with
/// MIL-STD-411F (or other non-standardized aural alerts)
/// </summary>
public class Form1 : System.Windows.Forms.Form
{
Note that the project is using "SAMPLETTSENGLib" and "SpeechLib"; these are based upon the speech related references added earlier in the project; check the used libraries and verify that you have each of those listed. Following the references is a standard namespace and class definition. Note the clever use of Form1 as a class name; you might want to be a little more creative than that.
Next up is an important item. Following the class definition, note the following code:
[DllImport("kernel32.dll")]
private static extern bool Beep( int frequency, int duration );
This code is critical to the operation of the application as the entire shooting match is largely based on manipulating the old "Beep" call from the kernel32 DLL. This reference is added into the project with this DLL import. Also note that "Beep" accepts two arguments, frequency and duration. Aside from using the "Beep" call in this application, you could do something interesting like build yourself a piano keyboard using the same call.
Following the DLL import, note the following code:
private SpeechVoiceSpeakFlags SpFlags = SpeechVoiceSpeakFlags.SVSFlagsAsync;
private SpVoice Voice = new SpVoice();
The second line of this snippet is creating an instance of a voice, the first line is setting the speak flag (speech mode) to one of the options in the SpeechVoiceSpeakFlags enumeration. In this case, it is set to the asynchronous mode. The enumeration contains the following options:
public enum SpeechVoiceSpeakFlags
SVSFUnusedFlags = -128,
SVSFDefault = 0,
SVSFlagsAsync = 1,
SVSFPurgeBeforeSpeak = 2,
SVSFIsFilename = 4,
SVSFIsXML = 8,
SVSFIsNotXML = 16,
SVSFPersistXML = 32,
SVSFNLPMask = 64,
SVSFNLPSpeakPunc = 64,
SVSFVoiceMask = 127,
}
The next thing worth looking at is initialization, scroll down until to you find this code:
public Form1()
//
// Required for Windows Form Designer support
//
InitializeComponent();
// Get System Voices
string strVoice;
foreach (SpeechLib.ISpeechObjectToken sot in Voice.GetVoices("", "") )
strVoice = sot.GetDescription(0); //'The token's name
cboVox.Items.Add(strVoice);
}
if (cboVox.Items.Count <= 0)
MessageBox.Show(this, "This system does not contain Text-to-
Speech capability.");
}
Following the Initialize component call, the application checks for existing voices on the machine and it adds any it finds to the Voice tab's Speaker combo box. If no voices are found, the user is alerted to the absence of voice capability.
The next function of interest is the call to play the sine wave tones from the first tab; this function is the button click event handler for the "Play Tone" button the Sine tab. The code looks like this:
private void btnPlayAdhoc_Click(object sender, System.EventArgs e)
try
{
// Set vars for sine wave tone
int startFreq = Convert.ToInt32(txtStartFreq.Text);
int endFreq = Convert.ToInt32(txtEndFreq.Text);
int duration = Convert.ToInt32(txtDuration.Text);
int dwell = Convert.ToInt32(txtDwell.Text);
int steps = Convert.ToInt32(txtSteps.Text);
int reps = Convert.ToInt32(txtRepetitions.Text);
int diff = Math.Abs(startFreq - endFreq);
diff = Convert.ToInt32(diff/duration);
for (int rep=0; rep<reps; rep++)
// tone
int CurrentFreq = startFreq;
for(int i=0; i<steps-1; i++)
{
Beep(CurrentFreq, Convert.ToInt32(duration/steps));
CurrentFreq = CurrentFreq + diff;
}
// dwell
Thread.Sleep(dwell);
}
catch (Exception ex)
{
MessageBox.Show(ex.Message.ToString(), "Error");
In this section of code, the sine tab's values are collected from the text boxes, converted to integer values and used to set a few local variables used within the function. After the variables are set, the code determines the difference between the high and low ends of the sweep and assigns that value to an integer variable. This variable is divided by the duration in order to derive the amount of frequency change added to each step's increase in the frequency value.
Following this set up, the function loops through the specified number of repetitions and on each repetition, the tone is set back to the original value and then the steps are looped and the Beep function is called with the current frequency and duration specified. The current frequency value is then increased by the frequency difference calculated for each step.
Lastly the function rests by calling thread sleep set the current dwell value in milliseconds; here I am just using the tread sleep call for rough timing. This will insert the dwell time before the next repetition of the tone is initiated.
The next bit of code worth looking at is used to play the saw tooth form tone, it is as follows:
private void btnPlaySawtooth_Click(object sender, System.EventArgs e)
try
{
int freq1 = Convert.ToInt32(txtFreq1.Text);
int duration1 = Convert.ToInt32(txtDuration1.Text);
int dwell1 = Convert.ToInt32(txtDwell1.Text);
int freq2 = Convert.ToInt32(txtFreq2.Text);
int duration2 = Convert.ToInt32(txtDuration2.Text);
int dwell2 = Convert.ToInt32(txtDwell2.Text);
int reps = Convert.ToInt32(txtSawToothReps.Text);
for (int i = 0; i < reps; i++)
{
Beep(freq1, duration1);
Thread.Sleep(dwell1);
Beep(freq2, duration2);
Thread.Sleep(dwell2);
}
}
catch (Exception ex)
MessageBox.Show(ex.Message.ToString(), "Error");
This code works in a manner consistent with the approach applied to the sine wave tone; the only difference being that the function loops through each of the specified repetitions and plays the two frequencies entered back to back with the dwell time separating them.
Next up is the code used to both save and play the synthesized voice, that code looks like this:
private void btnPlayVox_Click(object sender, System.EventArgs e)
try
if (chkSaveToWavFile.Checked)
Voice.Speak(txtSpeakText.Text, SpFlags);
SaveFileDialog sfd = new SaveFileDialog();
sfd.Filter = "All files (*.*)|*.*|wav files (*.wav)|*.wav";
sfd.Title = "Save to a wave file";
sfd.FilterIndex = 2;
sfd.RestoreDirectory = true;
if (sfd.ShowDialog()== DialogResult.OK)
{
SpeechStreamFileMode SpFileMode =
SpeechStreamFileMode.SSFMCreateForWrite;
SpFileStream SpFileStream = new SpFileStream();
SpFileStream.Open(sfd.FileName, SpFileMode, false);
Voice.AudioOutputStream = SpFileStream;
Voice.Speak(txtSpeakText.Text, SpFlags);
Voice.WaitUntilDone(Timeout.Infinite);
SpFileStream.Close();
}
else
try
}
catch (System.Exception ex)
MessageBox.Show(this, ex.Message.ToString() +
ex.StackTrace.ToString());
catch(Exception error)
MessageBox.Show(error.Message.ToString(), "Error",
MessageBoxButtons.OK, MessageBoxIcon.Error);
}}
This code first looks to see if the user has checked the "Save to WAV file" checkbox and, if they have, it opens a file save dialog used to capture the file name and location the user would like to use to save the wave file. After that the function saves the audio output to the specified file location.
If the user did not check the "Save to WAV file" checkbox, the function merely plays the voice message.
Lastly, if you take a look at the CannedWCAtones.cs file, you will see examples used to generate tones programmatically. In these examples, the functions work exactly like those mentioned previously, however, when the function is initialized, the function sets all of its internal values to those hard coded into the application rather than setting the variables to some user specified value.
Conclusion.
That is all there is to it; you may wish to examine the project files to take a look at the rest of the code in context but the relevant points have all been addressed in the document.
©2016
C# Corner. All contents are copyright of their authors. | http://www.c-sharpcorner.com/article/aural-alert-generator-voice-and-tones/ | CC-MAIN-2016-50 | refinedweb | 2,010 | 55.24 |
Events are a key part of general C# development. In Unity, they can sometimes be overlooked and less optimal options are used. If you’ve never used events, you’ll be happy to know that they’re easy to get started with and add a lot of value to your project architecture.
Before I cover the Event system in detail, I’d like to go over some of the common alternatives you’ll see in Unity projects.
BroadcastMessage
The BroadcastMessage method is part of the MonoBehaviour class. It allows you to send a loosely coupled message to a gameobject and all of its active children.
BroadcastMessage is simple to use and accomplishes the task of sending a message from one gameObject to another.
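A minimal sketch of how a call looks (the receiving method name and damage value here are hypothetical, not from the original example):

```csharp
using UnityEngine;

public class Player : MonoBehaviour
{
    public void TakeDamage()
    {
        // Invokes "HandlePlayerTookDamage" on every MonoBehaviour attached
        // to this gameobject and its children, passing 9 as the argument.
        BroadcastMessage("HandlePlayerTookDamage", 9, SendMessageOptions.DontRequireReceiver);
    }
}
```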
The biggest issue with BroadcastMessage is how it refers to the method that will be called. Because it takes a string as the first parameter, there is no compile time verification that the method actually exists.
It can be prone to typos, and it introduces danger when refactoring / renaming your methods. If the method name in the string doesn't match the method name in your classes, it will no longer work, and there's no obvious indication of this beyond your game failing to work properly.
The parameter is also not tightly coupled, so there's no type verification. BroadcastMessage accepts only a single optional parameter; if your method expects an int but you pass a string, the call will fail at runtime, and again there's no compile-time indication of the issue.
Another big drawback to BroadcastMessage is the fact that it only reaches the gameobject it's called on and that gameobject's children. For the example given, the UI Text would only receive the message if it's a child of the player.
Update Polling
Another common technique I see in Unity projects is polling properties in the Update() method.
Polling in an Update() method is fine for many things, but generally not the cleanest way to deal with cross gameobject communication.
```csharp
using UnityEngine;
using UnityEngine.UI;

namespace UpdatePolling
{
    public class PlayerHPBar : MonoBehaviour
    {
        private Text _text;
        private Player _player;

        private void Awake()
        {
            _text = GetComponent<Text>();
            _player = FindObjectOfType<Player>();
        }

        private void Update()
        {
            _text.text = _player.HP.ToString();
        }
    }
}
```
In this example, we update the text of our UI every frame to match the HP value of the player. While this works, it’s not very extensible, it can be a bit confusing, and it requires us to make variables public that may not really need to be.
It also gets a lot messier with Update polling when we only want to do things in specific situations. For updating the player HP UI, we may not mind doing it every frame, but imagine we also want to play a sound effect when the player takes damage; suddenly this method becomes much more complicated.
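To make that concrete, here is a sketch of what a polling version of a damage sound might look like. Note that it has to remember the previous HP every frame just to detect a change (the class name and audio setup are hypothetical):

```csharp
using UnityEngine;

namespace UpdatePolling
{
    [RequireComponent(typeof(AudioSource))]
    public class PlayerImpactAudioPolling : MonoBehaviour
    {
        private AudioSource _audioSource;
        private Player _player;
        private int _lastHp;

        private void Awake()
        {
            _audioSource = GetComponent<AudioSource>();
            _player = FindObjectOfType<Player>();
            _lastHp = _player.HP;
        }

        private void Update()
        {
            // Compare against last frame's value to detect that damage happened.
            if (_player.HP < _lastHp)
                _audioSource.Play();
            _lastHp = _player.HP;
        }
    }
}
```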
Events
If you’ve never coded an event, you’ve probably at least hooked into one before.
One built in Unity event I’ve written about recently is the SceneManager.sceneLoaded event.
This event fires whenever a new scene is loaded.
You can register for the sceneLoaded event and react to it like this.
```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

public class SceneLoadedListener : MonoBehaviour
{
    private void Start()
    {
        SceneManager.sceneLoaded += HandleSceneLoaded;
    }

    private void HandleSceneLoaded(Scene arg0, LoadSceneMode arg1)
    {
        string logMessage = string.Format("Scene {0} loaded in mode {1}", arg0, arg1);
        Debug.Log(logMessage);
    }
}
```
Each event can have a different signature, meaning the parameters the event will pass to your method can vary.
In the example, we can see that the sceneLoaded event passes two parameters. The parameters for this event are the Scene and the LoadSceneMode.
Creating your own Events
Now, let’s see how we can build our own events and tie them into the example before.
```csharp
using UnityEngine;

namespace UsingEvents
{
    public class Player : MonoBehaviour
    {
        public delegate void PlayerTookDamageEvent(int hp);
        public event PlayerTookDamageEvent OnPlayerTookDamage;

        public int HP { get; set; }

        private void Start()
        {
            HP = 10;
        }

        public void TakeDamage()
        {
            HP -= 1;
            if (OnPlayerTookDamage != null)
                OnPlayerTookDamage(HP);
        }
    }
}
```
In this example, we create a new delegate named PlayerTookDamageEvent which takes a single integer for our HP value.
Then we use the delegate to create an event named OnPlayerTookDamage.
Now, when we take damage, our Player class actually fires our new event so all listeners can deal with it how they like.
We have to check our event for null before calling it. If nothing has registered with our event yet, and we don’t do a null check, we’ll get a null reference exception.
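As an aside, if your Unity version supports C# 6 or newer, the null-conditional operator lets you collapse that check into a single line:

```csharp
// Equivalent to the explicit null check; does nothing if there are no subscribers.
OnPlayerTookDamage?.Invoke(HP);
```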
Next, we need to register for this newly created event. To do that, we’ll modify the PlayerHPBar script like this.
```csharp
using UnityEngine;
using UnityEngine.UI;

namespace UsingEvents
{
    public class PlayerHPBar : MonoBehaviour
    {
        private Text _text;

        private void Awake()
        {
            _text = GetComponent<Text>();
            Player player = FindObjectOfType<Player>();
            player.OnPlayerTookDamage += HandlePlayerTookDamage;
        }

        private void HandlePlayerTookDamage(int hp)
        {
            _text.text = hp.ToString();
        }
    }
}
```
To test our event, let’s use this PlayerDamager.cs script.
```csharp
using UnityEngine;
using System.Collections;

namespace UsingEvents
{
    public class PlayerDamager : MonoBehaviour
    {
        private void Start()
        {
            StartCoroutine(DealDamageEvery5Seconds());
        }

        private IEnumerator DealDamageEvery5Seconds()
        {
            while (true)
            {
                FindObjectOfType<Player>().TakeDamage();
                yield return new WaitForSeconds(5f);
            }
        }
    }
}
```
This script calls the TakeDamage() method on the Player every 5 seconds.
TakeDamage() then calls the OnPlayerTookDamage event which causes our PlayerHPBar to update the text.
Let’s see how this looks in action.
We can see here that the player's HP is decreasing and the text is updating.
Sidebar – Script Execution Order
You may have noticed something strange though. The first value shown is -1. This caught me off guard the first time, but the cause is visible in the code.
Before you continue reading, take a look and see if you can find it.
….
In our Player.cs script, we set the HP to 10 in the Start() method.
Our PlayerDamager.cs script also starts dealing damage in the Start() method.
Because our script execution order isn’t specified, the PlayerDamager script happens to be running first.
Since an int in c# defaults to a value of Zero, when TakeDamage() is called, the value changes to -1.
Fix #1
There are a few ways we can fix this.
We could change the script execution order so that Player always executes before PlayerDamager.
In the Script Execution Order screen, you can set the order as a number. Lower numbered scripts are run before higher numbered scripts.
Fix #2 – Better
While this would work, there’s a much simpler and cleaner option we can use.
We can change the Player.cs script to set our HP in the Awake() method instead of Start().
Awake() is always called before Start(), so script execution order won’t matter.
Back to Events
So now we have our event working, but we haven’t quite seen a benefit yet.
Let’s add a new requirement for our player. When the player takes damage, let’s play a sound effect that indicates that they were hurt.
PlayerImpactAudio
To do this, we’ll create a new script named PlayerImpactAudio.cs
using UnityEngine; namespace UsingEvents { [RequireComponent(typeof(AudioSource))] public class PlayerImpactAudio : MonoBehaviour { private AudioSource _audioSource; private void Awake() { _audioSource = GetComponent<AudioSource>(); FindObjectOfType<Player>().OnPlayerTookDamage += PlayAudioOnPlayerTookDamage; } private void PlayAudioOnPlayerTookDamage(int hp) { _audioSource.Play(); } } }
Notice on line 13, we register for the same OnPlayerTookDamage event that we used in the PlayerHPBar.cs script.
One of the great things about events is that they allow multiple registrations.
Because of this, we don’t need to change the Player.cs script at all. This means we’re less likely to break something.
If you’re working with others, you’re also less likely to need to do a merge with another developers code.
We’re also able to more closely adhere to the single responsibility principal.
The single responsibility principle states that every module or class should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by the class. All itsservices should be narrowly aligned with that responsibility.
Robert C. Martin expresses the principle as follows:[1] “A class should have only one reason to change.”
The GameObject & AudioSource
You may have also noticed line 5 which tells the editor that this component requires another component to work.
Here, we’re telling it that we need an AudioSource on the gameobject. We do this because line 11 looks for an AudioSource to play our sound effect from.
For this example, we’ve created a new gameobject and attached both our PlayerImpactAudioSource.cs script and an AudioSource.
Then we need to assign an AudioClip. As you can see in the example, I’ve recorded my own sound effect and named it “oww”.
Now when we hit play, a sound effect triggers every time the TakeDamage() method is called.
Actions – Use these!
If this is all new to you, don’t worry, we’re almost done, and it gets easier.
Actions were added to c# with .net 2.0. They’re meant to simplify events by removing some of the ceremony around them.
Let’s take another quick look at how we defined the event in our Player.cs script.
public delegate void PlayerTookDamageEvent(int hp); public event PlayerTookDamageEvent OnPlayerTookDamage;
First, we define the event signature, declaring that our event will pass one integer named “hp”.
Then we declare the event so that other code can register for it.
With Actions, we can cut that down to one line.
using System; using UnityEngine; namespace UsingActions { public class Player : MonoBehaviour { public Action<int> OnPlayerTookDamage; public int HP { get; set; } private void Start() { HP = 10; } public void TakeDamage() { HP -= 1; if (OnPlayerTookDamage != null) OnPlayerTookDamage(HP); } } }
That’s all there is to it. Nothing else needs to change. All the other scripts work exactly the same. We’ve simply reduced the amount of code needed for the same effect.
While this is great for the majority of events, there is one reason you may want to still use the occasional event. That would be when your event has many parameters that can be easily confused with each other. My recommendation for that situation however is to re-think your events and see if the amount of data you’re passing is larger than it needs to be. If you really need to pass a lot of data to an event though, another great option is to create a new class or struct and fill it with your data, then pass that into the event.
Final Tip
Before I go, it’s also worth mentioning that you can have multiple parameters to an Action. To do this, simply comma separate your parameter types like this.
public Action<int, string, MyCustomClass> OnSomethingWithThreeParameters { get; set; }
If you have questions or comments about Events or using Action, please leave a comment or send me an email. | https://unity3d.college/2016/10/05/unity-events-actions-delegates/ | CC-MAIN-2020-29 | refinedweb | 1,787 | 64.91 |
This tutorial is out of date and no longer maintained.
These are the 3 tips I found pretty handy while working with TypeScript:
Though I discovered these while working with Angular applications, all tips are not Angular-specific, it’s just TypeScript.
I like interfaces. However, I don’t like to import them every time. Although Visual Studio Code has an auto-import feature, I don’t like my source files been “polluted” by multiple lines of imports - just for the purpose of strong typing.
This is how we do it normally.
// api.model.ts export interface Customer { id: number; name: string; } export interface User { id: number; isActive: boolean; }
// using the interfaces import { Customer, User } from './api.model'; // this line will grow longer if there's more interfaces used export class MyComponent { cust: Customer; }
By using namespace, we can eliminate the need to import interfaces files.
// api.model.ts namespace ApiModel { export interface Customer { id: number; name: string; } export interface User { id: number; isActive: boolean; } }
// using the interfaces export class MyComponent { cust: ApiModel.Customer; }
Nice right? Using namespace also helps you to better organize and group the interfaces. Please note that you can split the namespace across many files.
Let’s say you have another file called
api.v2.model.ts. You add in new interfaces, but you want to use the same namespace.
// api.v2.model.ts namespace ApiModel { export interface Order { id: number; total: number; } }
You can definitely do so. To use the newly created interface, just use them as the previous example.
// using the interfaces with same namespaces but different files export class MyComponent { cust: ApiModel.Customer; order: ApiModel.Order; }
Here is the detail documentation on TypeScript namespacing.
The other way to eliminate import is to create a TypeScript file end with
.d.ts. “d” stands for declaration file in TypeScript (more explanation here).
// api.model.d.ts // you don't need to export the interface in d file interface Customer { id: number; name: string; }
Use it as normal without the need to import it.
// using the interfaces of d file export class MyComponent { cust: Customer; }
I recommend solution 1 over solution 2 because:
It’s quite common where you will use the same interface for CRUD. Let’s say you have a customer interface, during creation, all fields are mandatory, but during an update, all fields are optional. Do you need to create two interfaces to handle this scenario?
Here is the interface
// api.model.ts export interface Customer { id: number; name: string; age: number; }
Partial is a type to make properties an object optional. The declaration is included in the default d file
lib.es5.d.ts.
// lib.es5.d.ts type Partial<T> = { [P in keyof T]?: T[P]; };
How can we use that? Look at the code below:
// using the interface but make all fields optional import { Customer } from './api.model'; export class MyComponent { cust: Partial<Customer>; / ngOninit() { this.cust = { name: 'jane' }; // no error throw because all fields are optional } }
If you don’t find
Partial declaration, you may create a d file yourself (e.g. util.d.ts) and copy the code above into it.
For more advanced type usage of TypeScript, you can read here.
As a JavaScript-turned-TypeScript developer, one might find TypeScript error is annoying sometimes. In some scenarios, you just want to tell TypeScript, “Hey, I know what I am doing, please leave me alone.”.
@ts-ignorecomment
From TypeScript version 2.6 onwards, you can do so by using comment
@ts-ignore to suppress errors.
For example, TypeScript will throw error “Unreachable code detected” in this following code:
if (false) { console.log('x'); }
You can suppress that by using comment
@ts-ignore
if (false) { // @ts-ignore console.log('x'); }
Find out more details here: TypeScript 2.6 release
Of course, I will suggest you always try to fix the error before ignoring it!
TypeScript is good for your (code) health. It has pretty decent documentation. I like the fact that they have comprehensive
What's new documentation for every release. It’s an open source project in GitHub if you would like to contribute. The longer I work with TypeScript, the more I love it and appreciate it.
That’s it, happy coding!! | https://www.digitalocean.com/community/tutorials/3-useful-typescript-tips-for-angular | CC-MAIN-2022-33 | refinedweb | 706 | 58.08 |
I hesitate to submit this, but I don't think this merits the usual "what do you
expect from doubles?" response. This is with:
Reading specs from /usr/local/lib/gcc-lib/i686-pc-linux-gnu/3.3.1/specs
Configured with: ../configure --prefix=/usr/local --enable-
languages=c,c++,f77,objc,java,ada --enable-threads=posix --enable-shared --
enable-__cxa_atexit --with-system-zlib
Thread model: posix
gcc version 3.3.1
and the program:
#include <stdio.h>
int main ()
{
double d = 0.1;
double e = 5;
double f = e / d;
int i = (int) f;
int j = (int) (e / d);
printf ("f = %lf\n", f);
printf ("i = %d\n", i);
printf ("j = %d\n", j);
return 0;
}
Notice that i and j are the integer result of 5 / 0.1 but that the intermediate
double f is used to initialise i. I can just about accept that 0.1 is
represented by 0.10000000000000001 according to gdb. What does surprise me is
that i and j can have different values:
$ gcc double.cpp && ./a.out
f = 50.000000
i = 50
j = 49
$ gcc -O double.cpp && ./a.out
f = 50.000000
i = 50
j = 50
$ gcc -O -ffloat-store double.cpp && ./a.out
f = 50.000000
i = 50
j = 49
As you can see, i != j unless I use raw optimisation. I can just about accept
that the result of the arithmetic is not exactly 50, but surely i should always
be equal to j? In other words, I think i==j==50 or i==j==49 are correct
outcomes but i!=j is not.
This is caused by x86's rounding modes and excessive precision so it is a dup of bug 323.
*** This bug has been marked as a duplicate of 323 ***
Thanks for the quick response. I see that floating-point bugs are frequently
raised and closed against 323.
I'm not 100% sure it is exactly the same but 100% sure that I'm stuck. The
suggestion for 323 is to use -ffloat-store. Unfortunately, for this problem it
actually makes things worse (when used with optimisation) and makes no
difference (when used without optimisation). It also suggests to change the fp
rounding mode, but the gcc info doesn't seem to mention anything for that with
x86.
The M$ compiler has no problem with my code. How does it do it? How can I
make gcc do it? If there is no way to make gcc do it, then surely it is a bug
or at least a missing feature? | https://gcc.gnu.org/bugzilla/show_bug.cgi?id=11892 | CC-MAIN-2018-43 | refinedweb | 425 | 77.13 |
Aligning the fastText vectors of 78 languages
Facebook recently open-sourced word vectors in 89 languages. However these vectors are monolingual; meaning that while similar words within a language share similar vectors, translation words from different languages do not have similar vectors. In a recent paper at ICLR 2017, we showed how the SVD can be used to learn a linear transformation (a matrix), which aligns monolingual vectors from two languages in a single vector space. In this repository we provide 78 matrices, which can be used to align the majority of the fastText languages in a single space.
This readme explains how the matrices should be used. We also present a simple evaluation task, where we show we are able to successfully predict the translations of words in multiple languages. Our procedure relies on collecting bilingual training dictionaries of word pairs in two languages, but remarkably we are able to successfully predict the translations of words between language pairs for which we had no training dictionary!.
Note that since we released this repository Facebook have released an additional 204 languages; however the word vectors of the original 90 languages have not changed, and the transformations provided in this repository will still work. If you would like to learn your own alignment matrices, we provide an example in align_your_own.ipynb.
If you use this repository, please cite:
Offline bilingual word vectors, orthogonal transformations and the inverted softmax
Samuel L. Smith, David H. P. Turban, Steven Hamblin and Nils Y. Hammerla
ICLR 2017 (conference track)
TLDR, just tell me what to do!
Clone a local copy of this repository, and download the fastText vectors you need from here. I'm going to assume you've downloaded the vectors for French and Russian in the text format. Let's say we want to compare the similarity of "chat" and "кот". We load the word vectors:
from fasttext import FastVector
fr_dictionary = FastVector(vector_file='wiki.fr.vec')
ru_dictionary = FastVector(vector_file='wiki.ru.vec')
We can extract the word vectors and calculate their cosine similarity:
fr_vector = fr_dictionary["chat"]
ru_vector = ru_dictionary["кот"]
print(FastVector.cosine_similarity(fr_vector, ru_vector))
# Result should be 0.02
The cosine similarity runs between -1 and 1. It seems that "chat" and "кот" are neither similar nor dissimilar. But now we apply the transformations to align the two dictionaries in a single space:
fr_dictionary.apply_transform('alignment_matrices/fr.txt')
ru_dictionary.apply_transform('alignment_matrices/ru.txt')
And re-evaluate the cosine similarity:
print(FastVector.cosine_similarity(fr_dictionary["chat"], ru_dictionary["кот"]))
# Result should be 0.43
Turns out "chat" and "кот" are pretty similar after all. This is good, since they both mean "cat".
Ok, so how did you obtain these matrices?
Of the 89 languages provided by Facebook, 78 are supported by the Google Translate API. We first obtained the 10,000 most common words in the English fastText vocabulary, and then use the API to translate these words into the 78 languages available. We split this vocabulary in two, assigning the first 5000 words to the training dictionary, and the second 5000 to the test dictionary.
We described the alignment procedure in this blog. It takes two sets of word vectors and a small bilingual dictionary of translation pairs in two languages; and generates a matrix which aligns the source language with the target. Sometimes Google translates an English word to a non-English phrase, in these cases we average the word vectors contained in the phrase.
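The construction itself is short: given matrices whose rows are the dictionary's paired source and target vectors, the best orthogonal map is the solution of the orthogonal Procrustes problem, obtained from a single SVD. A numpy sketch (the names here are ours, not the repository's API; see align_your_own.ipynb for the real implementation):

```python
import numpy as np

def learn_transformation(source_matrix, target_matrix):
    """Learn the orthogonal matrix that best maps each row of
    source_matrix onto the corresponding row of target_matrix
    (rows are the paired vectors of the bilingual training dictionary)."""
    product = source_matrix.T @ target_matrix
    u, _, vt = np.linalg.svd(product)
    # apply as: aligned_source = source_matrix @ (u @ vt)
    return u @ vt
```

Because the result is a product of two orthogonal factors, the learned matrix is itself orthogonal by construction.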
To place all 78 languages in a single space, we align every language to the English vectors (the English matrix is the identity).
Right, now prove that this procedure actually worked...
To prove that the procedure works, we can predict the translations of words not seen in the training dictionary. For simplicity we predict translations by nearest neighbours. So for example, if we wanted to translate "dog" into Swedish, we would simply find the Swedish word vector whose cosine similarity to the "dog" word vector is highest.
First things first, let's test the translation performance from English into every other language. For each language pair, we extract a set of 2500 word pairs from the test dictionary. The precision @n denotes the probability that, of the 2500 target words in this set, the true translation was one of the top n nearest neighbours of the source word. If the alignment was completely random, we would expect the precision @1 to be around 0.0004.
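Concretely, nearest-neighbour prediction is a cosine-similarity argmax over the target vocabulary. A rough numpy sketch (the function and variable names are ours, not the repository's API):

```python
import numpy as np

def translate_nearest_neighbour(source_vector, target_matrix, target_words):
    """Predict a translation: the target word whose (row) vector has the
    highest cosine similarity to the already-aligned source vector."""
    norms = np.linalg.norm(target_matrix, axis=1, keepdims=True)
    scores = (target_matrix / norms) @ (source_vector / np.linalg.norm(source_vector))
    return target_words[int(np.argmax(scores))]
```

In practice you would pre-normalise the target matrix once and reuse it for every query word.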
As you can see, the alignment is consistently much better than random! In general, the procedure works best for other European languages like French, Portuguese and Spanish. We use 2500 word pairs, because of the 5000 words in the test dictionary, not all the words found by the Google Translate API are actually present in the fastText vocabulary.
Now let's do something much more exciting, let's evaluate the translation performance between all possible language pairs. We exhibit this translation performance on the heatmap below, where the colour of an element denotes the precision @1 when translating from the language of the row into the language of the column.
We should emphasize that all of the languages were aligned to English only. We did not provide training dictionaries between non-English language pairs. Yet we are still able to successfully predict translations between pairs of non-English languages remarkably accurately.
We expect the diagonal elements of the matrix above to be 1, since a language should translate perfectly to itself. However in practice this does not always occur, because we constructed the training and test dictionaries by translating common English words into the other languages. Sometimes multiple English words translate to the same non-English word, and so the same non-English word may appear multiple times in the test set. We haven't properly accounted for this, which reduces the translation performance.
Intriguingly, even though we only directly aligned the languages to English, sometimes a language translates better to another non-English language than it does to English! We can calculate the inter-pair precision of two languages; the average precision from language 1 to language 2 and vice versa. We can also calculate the English-pair precision; the average of the precision from English to language 1 and from English to language 2. Below we list all the language pairs for which the inter-pair precision exceeds the English-pair precision:
All of these language pairs share very close linguistic roots. For instance, the first pair above is Bosnian and Serbo-Croatian; Bosnian is a variant of Serbo-Croatian. The second pair is Russian and Ukrainian, both East Slavic languages. It seems that the more similar two languages are, the more similar the geometry of their fastText vectors, leading to improved translation performance.
How do I know these matrices don't change the monolingual vectors?
The matrices provided in this repository are orthogonal. Intuitively, each matrix can be broken down into a series of rotations and reflections. Rotations and reflections do not change the distance between any two points in a vector space; and consequently none of the inner products between word vectors within a language are changed, only the inner products between the word vectors of different languages are affected.
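This is easy to check numerically: an orthogonal transformation leaves every inner product — and therefore every length and cosine similarity — within a language unchanged. For example:

```python
import numpy as np

rng = np.random.default_rng(42)
# a random orthogonal matrix, built via the QR decomposition
w, _ = np.linalg.qr(rng.normal(size=(300, 300)))

x = rng.normal(size=300)
y = rng.normal(size=300)

# inner product between two "word vectors" is preserved...
assert np.allclose(x @ y, (x @ w) @ (y @ w))
# ...and so are vector lengths
assert np.allclose(np.linalg.norm(x), np.linalg.norm(x @ w))
```

Only inner products *between* the vector spaces of two different languages change, which is exactly what the alignment is meant to do.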
References
There are a number of great papers on this topic. We've listed a few of them below:
- Enriching word vectors with subword information
Bojanowski et al., 2016
- Offline bilingual word vectors, orthogonal transformations and the inverted softmax
Smith et al., ICLR 2017
- Exploiting similarities between languages for machine translation
Mikolov et al., 2013
- Improving vector space word representations using multilingual correlation
Faruqui and Dyer, EACL 2014
- Improving zero-shot learning by mitigating the hubness problem
Dinu et al., 2014
- Learning principled bilingual mappings of word embeddings while preserving monolingual invariance
Artetxe et al., EMNLP 2016
Training and test dictionaries
A number of readers have expressed an interest in the training and test dictionaries we used in this repository. We would have liked to upload these, however, while we have not taken legal advice, we are concerned that this could be interpreted as breaking the terms of the Google Translate API.
License
The transformation matrices are distributed under the Creative Commons Attribution-Share-Alike License 3.0. | https://www.ctolib.com/Babylonpartners-fastText_multilingual.html | CC-MAIN-2018-34 | refinedweb | 1,412 | 54.12 |
I am using Google Chrome's Kiosk mode to display some internal websites used at our company.
The Android 10 Devices are implemented in Enterprise Mode with Microsoft Intune MDM.
There I can create a Shortcut through the managed Google Play Store – where I can set up the rough appearance of the Kiosk Mode.
So far so good. But I am stuck at customizing it further, which is crucial in the given scenario. – Sometimes it is necessary to actually close the browser and start with a new session. – But the session should be retained when simply switching to another app, for example.
The switching works, but there is no “Exit Browser” button; instead there are “Share” and “open in Google Chrome” features, which are not needed and could also be removed.
Any Ideas?
1. Materials
AppArmor policy reference
2. My profile

#include <tunables/global>

profile docker-test flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/base>

  deny /data/** rwl,
  deny /usr/bin/top mrwklx,
  deny /usr/bin/hello mrwklx,
  deny network,
  file,
  capability,

  deny network inet tcp,
  deny network bind inet tcp src 192.168.1.1:80 dst 170.1.1.0:80,
}
3. My error

syntax error, unexpected TOK_ID, expecting TOK_END_OF_RULE

The error comes from the last line, which contains specific IP addresses. I tested this on Ubuntu 18.04; my kernel version is 5.4.0-42-generic and my AppArmor version is 3.0.1, which I compiled from source.
I created my theme as a Luma child, but I cannot really understand the best way to customize the CSS. Until now I have just written all of my styles in the _extend.less file, but that is a lot of stuff for just one file, I think! So…
What is the best, upgrade-proof way to extend the Magento Luma theme?
Where can I find a specific tutorial or course which clearly explains this? I tried to read the official doc, but it is not clear to me.
I wish to edit my default theme header, change the logo etc.
When I click customize there are almost zero options for this. I have a few basic plugins installed but I cannot see how that would affect this.
What should I do?
We have migrated 3 custom lists from SP 2013 to a classic team site inside SP Online. The 3 lists have the following main settings:
So now the expected behaviour is that the list views will render using the modern UI, but when we create/edit the list items the Create & Edit forms will render using the classic experience. This is working as expected for 2 lists but is failing for the third list, where the Create & Edit forms render using the modern UI, although the forms have been customized using Script Editor web parts. Any advice on this?
Right now I am receiving mail in this format, and I want to show all the data in short form. Please can anyone support me?
(Submitted on Tue, 03/30/2021 – 10:23 Submitted by: Shabana Submitted values are: I want to register for: Yes *Company * Oratione *Name * Loremipsum *Phone Number * 5 Email random@random.com (1) *preferred contact method * Phone CMS software jumla (1) random@random.com)
I’m making a game with Unity. I wanted the player script to not have some features in some scenes but couldn’t remove them from the script because I needed them for other scenes.
So far, I just unpacked those player prefabs, created new scripts for each of them and removed the lines that I didn’t want to have. I know this was not the best idea but I don’t know any other way. One of its problems is that it makes managing the scripts hard and confusing.
Is there a way that I can change them without creating new ones? Since I’m going to switch to the new Input system, I want to avoid the possible problems and headaches around these scripts. I also did this (creating a new script) for pause menu script as well. I provided two examples below to make the situation more clear.
Example A) The player can use a shotgun in one of the scenes, and I chose not to let him use the knife there. So I created a new script for that player, copied and pasted the original script (which had the melee attack ability) into it, and then removed those lines from the new script.
Example B) Pause menu can disable the health bar game object whenever the escape button is pressed. But for one of the levels, I don’t want to use the health bar, so I pasted the code in a new pause menu script and removed the line that disables the bar.
We are evaluating OpenID for our needs, and I am having trouble finding concise answers to a few questions. We have a current user database with all pertinent info, including hashing mechanisms for passwords. The current app uses this database for simple authentication.
Thanks!
I am currently working on a Magento store.
There is a 3rd party price tag that shows when viewing a product.
The original price is sometimes displayed as plain text next to the image of the product. When this is the case the 3rd party price tag works fine. However, sometimes the plain text original price isn’t there, instead the price is in a different element, overlapping the “add to cart” button.
When the latter is the case, the 3rd party price tag wont appear (presumably because it can’t find a price to display).
I am currently trying to modify the store's theme so that it will check for the presence of the text price tag and, if it is not there, use the price from the button instead.
In doing this I'm stuck: it seems none of the changes I make to the theme show up when I reload the page. I've checked to make sure I'm modifying the right theme.
Even just adding a simple div or text block, I'm not seeing anything come through on the live site.
The data I’m editing is in: app/design/frontend/efv1/default/template
Efv1 is confirmed as the currently active theme.
With normal price tag –
(1):
With normal price tag missing –
(2): | https://proxies-free.com/tag/customize/ | CC-MAIN-2021-21 | refinedweb | 1,064 | 71.44 |
Entity Framework Core Quick Overview
Entity Framework (EF) Core is a lightweight, extensible, and cross-platform version of the popular Entity Framework data access technology.
EF Core supports many database engines; see Database Providers for details.
If you like to learn by writing code, we'd recommend one of our Getting Started guides to get you started with EF Core.
What is new in EF Core
If you are familiar with EF Core and want to jump straight into the details of the latest releases:
- What is new in EF Core 2.1 (currently in preview)
- What is new in EF Core 2.0 (the latest released version)
- Upgrading existing applications to EF Core 2.0
Get Entity Framework Core
Install the NuGet package for the database provider you want to use. E.g. to install the SQL Server provider in cross-platform development using
dotnet tool in the command line:
dotnet add package Microsoft.EntityFrameworkCore.SqlServer
Or in Visual Studio, using the Package Manager Console:
Install-Package Microsoft.EntityFrameworkCore.SqlServer
See Database Providers for information on available providers and Installing EF Core for more detailed installation steps.
The Model
With EF Core, data access is performed using a model. A model is made up of entity classes and a derived context that represents a session with the database, allowing you to query and save data. See Creating a Model to learn more.
You can generate a model from an existing database, hand code a model to match your database, or use EF Migrations to create a database from your model (and evolve it as your model changes over time).
using Microsoft.EntityFrameworkCore;
using System.Collections.Generic;

namespace Intro
{
    public class BloggingContext : DbContext
    {
        public DbSet<Blog> Blogs { get; set; }
        public DbSet<Post> Posts { get; set; }

        protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        {
            optionsBuilder.UseSqlServer(
                @"Server=(localdb)\mssqllocaldb;Database=MyDatabase;Trusted_Connection=True;");
        }
    }

    public class Blog
    {
        public int BlogId { get; set; }
        public string Url { get; set; }
        public int Rating { get; set; }
        public List<Post> Posts { get; set; }
    }

    public class Post
    {
        public int PostId { get; set; }
        public string Title { get; set; }
        public string Content { get; set; }
        public int BlogId { get; set; }
        public Blog Blog { get; set; }
    }
}
Querying
Instances of your entity classes are retrieved from the database using Language Integrated Query (LINQ). See Querying Data to learn more.
using (var db = new BloggingContext())
{
    var blogs = db.Blogs
        .Where(b => b.Rating > 3)
        .OrderBy(b => b.Url)
        .ToList();
}
Saving Data
Data is created, deleted, and modified in the database using instances of your entity classes. See Saving Data to learn more.
using (var db = new BloggingContext()) { var blog = new Blog { Url = "" }; db.Blogs.Add(blog); db.SaveChanges(); } | https://docs.microsoft.com/en-gb/ef/core/ | CC-MAIN-2018-22 | refinedweb | 430 | 53.21 |
/*
 * Copyright (c) 2000 Silicon Graphics, Inc.  All Rights Reserved.
 *
 * General Public License along
 * with this program; if not, write the Free Software Foundation, Inc., 59
 * Temple Place - Suite 330, Boston MA 02111-1307, USA.
 *
 * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
 * Mountain View, CA 94043, or:
 *
 *
 *
 * For further information regarding this notice, see:
 *
 *
 */
/* $Id: f00f.c,v 1.2 2001/02/28 17:42:00 nstraz Exp $ */

/*
 * This is a simple test for handling of the pentium f00f bug.
 * It is an example of a catastrophic test case.  If the system
 * doesn't correctly handle this test, it will likely lockup.
 */

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

#ifdef __i386__

/*
 * an f00f instruction
 */
char x[5] = { 0xf0, 0x0f, 0xc7, 0xc8 };

void sigill (int sig)
{
    printf ("SIGILL received from f00f instruction. Good.\n");
    exit (0);
}

int main ()
{
    void (*f) () = (void *) x;

    signal (SIGILL, sigill);
    printf ("Testing for proper f00f instruction handling.\n");
    f ();

    /*
     * we shouldn't get here, the f00f instruction should trigger
     * a SIGILL or lock the system.
     */
    exit (1);
}

#else /* __i386__ */

int main ()
{
    printf ("f00f bug test only for i386\n");
    exit (0);
}

#endif /* __i386__ */
I have the following code:
import java.util.*;

public class Final {
    private double guess = 1;

    public Final(double x) {
        root(x);
    }

    public double root(double b) {
        double rez = -1;
        if (Math.abs(guess * guess - b) < 0.001) {
            rez = guess;
        } else {
            guess = (2 / guess + guess) / 2;
            rez = root(b);
        }
        return rez;
    }

    public double rooting(double d) {
        while (Math.abs(guess * guess - d) > 0.001) {
            System.out.println(guess);
            guess = (2 / guess + guess) / 2;
        }
        return guess;
    }

    public static void main(String[] args) {
        System.out.println(new Final(2));
    }
}
In other words, the two methods do the same job: they find the square root of a positive integer using Newton's method. And when I call the methods inside the main method using the following code:
Final x = new Final();
x.root(2);
x.rooting(2);
they both work fine. However, when calling the methods inside the constructor, using code such as:
System.out.println(new Final(2));
what I get on the screen is: Final@9cab16.
Finally, I am aware that this is perhaps out of bounds of a practical question, but I would like to know what is going on.
Anybody with an answer is very much thanked! | https://www.daniweb.com/programming/software-development/threads/307496/under-the-hood-problem | CC-MAIN-2018-13 | refinedweb | 199 | 67.35 |
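For what it's worth: `Final@9cab16` is Java's default `Object.toString()` output (class name plus a hash), because `println(new Final(2))` prints the constructed object itself, not the root. Separately, note that the update `(2/guess + guess)/2` in the posted code hard-codes √2; the general Newton step for √b is `(guess + b/guess)/2`. A minimal Python sketch of that general step (illustrative only, not the poster's Java):

```python
def newton_sqrt(b, guess=1.0, tol=0.001):
    """Newton's method for sqrt(b): repeat guess <- (guess + b/guess) / 2."""
    while abs(guess * guess - b) > tol:
        guess = (guess + b / guess) / 2
    return guess

print(newton_sqrt(2))   # ~1.4142
```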
A re-frame effects handler for coordinating the kind of async control flow which often happens on ap...
This re-frame library coordinates a set of asynchronous, stateful tasks which have dependencies (and consequently need to be ordered).
Using this library, you can coordinate the kind of asynchronous control flow which is often necessary to successfully boot a re-frame application "up" into a functioning state, while also gracefully handling failures.
As an example, imagine an application which, during startup, has to connect with a backend server via a web socket, load some user settings data, and then a user portfolio, while also connecting to Intercom, but not until the user settings were loaded, while remembering that any of these processes might fail because of network problems.
So, it has a similar intent to mount or component, but it dovetails with, and leverages the event driven nature of re-frame's architecture.
Technically, this library implements an Effect Handler, keyed `:async-flow`. It has a declarative, data oriented design.
Add the following project dependency:
Requires re-frame >= 0.8.0
In your app's main entry function, we want to initiate the boot process:
```clj
(defn ^:export main
  []
  (dispatch-sync [:boot])   ;; ...
  ...)
```

> Why the use of `dispatch-sync`, rather than `dispatch`?
>
> Well, `dispatch-sync` is convenient here because it ensures that `app-db` is synchronously initialised before we start mounting components/views (which subscribe to state). Using `dispatch` would work too, except it runs the handler later. So, we'd have to then code defensively in our subscriptions and views, guarding against having an uninitialised `app-db`.
>
> This is the only known case where you should use `dispatch-sync` over `dispatch` (other than in tests).
Step 3. Registration And Use
In the namespace where you register your event handlers, perhaps called `events.cljs`, you have 3 things to do.

First, add this require to the `ns`:

```clj
(ns app.events
  (:require
    ...
    [day8.re-frame.async-flow-fx]   ;; ...
    ...))
```

> Because we never subsequently use this `require`, it appears redundant. But its existence will cause the `:async-flow` effect handler to self-register with re-frame, which is important to everything that follows.

Second, write a function which returns a declarative description (as a data structure) of the async flow required, like this:

```clj
(defn boot-flow
  []
  {:first-dispatch [:do-X]   ;; what event kicks things off ?
   :rules [                  ;; a set of rules describing the required flow
     {:when :seen? :events :success-X :dispatch [:do-Y]}
     {:when :seen? :events :success-Y :dispatch [:do-Z]}
     {:when :seen? :events :success-Z :halt? true}
     {:when :seen-any-of? :events [:fail-X :fail-Y :fail-Z] :dispatch [:app-failed-state] :halt? true}]})
```

Try to read each `rule` line as an English sentence: when this event happens, dispatch another event. The rules above combine to run tasks X, Y and Z serially, like dominoes. Much more complicated scenarios are possible. Full particulars of this data structure are provided below in the Tutorial section.
Third, write the event handler for `:boot`:

> Remember that `(dispatch-sync [:boot])` in step 2. We are now writing and registering the associated event handler.

This event handler will do two things:
1. It goes through an initial synchronous series of tasks which get `app-db` into the right state
2. It kicks off a multistep asynchronous flow

```clj
(reg-event-fx            ;; note the -fx
  :boot                  ;; usage: (dispatch [:boot])  See step 3
  (fn [_ _]
    {:db (-> {}          ;; do whatever synchronous work needs to be done
             task1-fn    ;; ?? set state to show "loading" twirly for user??
             task2-fn)   ;; ?? do some other simple initialising of state
     :async-flow (boot-flow)}))   ;; kick off the async process
```

Notice that last line. This library provides the "effect handler" which implements `:async-flow`. It reads and actions the data structure returned by `(boot-flow)`.
Testing
Unit tests use standard `cljs.test`.

To run tests in a browser:

```
lein watch
```

To run the tests with Karma:

```
npm install karma-cli -g   # install the global CLI Karma tool
lein ci
```
Problem Definition
When an App boots, it performs a set of tasks to initialise itself.
Invariably, there are dependencies between these tasks, like task1 has to run before task2.
Because of these dependencies, "something" has to coordinate how tasks are run. Within the clojure community, a library like Component or mount is often turned to in these moments, but we won't be doing that here. We'll be using an approach which is more re-frame friendly.
Easy
If the tasks are all synchronous, then the coordination can be done in code.
Each task is a function, and we satisfy the task dependencies by correctly ordering how they are called. In a re-frame context, we'd have this:

```clj
(reg-event-db
  :boot
  (fn [db]
    (-> {}
        task1-fn
        task2-fn
        task3-fn)))
```

and in our app's `main` function we'd `(dispatch [:boot])`.
Time
But, of course, it is never that easy because some of the tasks will be asynchronous.
A booting app will invariably have to coordinate asynchronous tasks like "open a websocket", "establish a database connections", "load from LocalStore", "GET configuration from an S3 bucket" and "querying the database for the user profile".
Coordinating asynchronous tasks means finding ways to represent and manage time, and time is a programming menace. In Greek mythology, Cronus was the much feared Titan of Time, believed to bring cruelty and tempestuous disorder, which surely makes him the patron saint of asynchronous programming.
Solutions like promises and futures attempt to make time disappear and allow you to program with the illusion of synchronous computation. But time has a tendency to act like a liquid under pressure, finding the cracks and leaking through the abstractions.
Something like CSP (core.async) is more of an event oriented treatment. Less pretending. But... unfortunately more complicated. `core.async` builds a little state machine for you, under the covers, so that you can build your own state machine on top of that again via deft use of go loops, channels, gets and puts. Both layers try to model/control time.
In our solution, we'll be using a re-frame variation which hides (most of) the state machine complexity.
Failures
There will also be failures and errors!
Nothing messes up tight, elegant code quite like error handling. Did the Ancient Greeks have a terrifying Titan for the unhappy path too? Ernos? They should have.
When one of the asynchronous startup tasks fails, we must be able to stop the normal boot sequence and put the application in a satisfactory failed state, sending necessary logs and explaining to the user what went wrong - eg: "No Internet connection" or "Couldn't load user portfolio".
Efficiency
And then, of course, there's the familiar pull of efficiency.
We want our app to boot in the shortest possible amount of time. So any asynchronous tasks which can be done in parallel, must be done in parallel.
The boot process is seldom linear, one task after an another. Instead, it involves dependencies like: when task1 has finished, start all of task2, task3 and task4 in parallel. And task5 can be started only when both task2 and task3 has completed successfully. And task6 can start when task3 alone has completed, but we really don't care if it finishes properly - it is non essential to a working app.
So, we need to coordinate asynchronous flows, with complex dependencies, while handling failures. Not easy, but that's why they pay us the big bucks.
As Data Please
Because we program in Clojure, we spend time in hammocks dreamily re-watching Rich Hickey videos and meditating on essential truths like "data is the ultimate in late binding".
So, our solution must involve "programming with data" and be, at once, all synonyms of easy.
In One Place
The control flow should be described in just one place, and easily grokable as a unit.
To put that another way: we do not want a programmer to have to look in multiple places to reconstruct a mental model of the overall control flow.
The Solution
re-frame has events. That's how we roll.
A re-frame application can't step forward in time unless an event happens; unless something does a `dispatch`. Events will be the organising principle in our solution exactly because events are an organising principle within re-frame itself.
Tasks and Events
As you'll soon see, our solution assumes the following about tasks...
If we take an X-ray of an async task, we'll see this event skeleton:
- an event is used to start the task
- if the task succeeds, an event is dispatched
- if the task fails, an event is dispatched
So that's three events: one to start and two ways to finish. Please read that again - its importance is sometimes missed on first reading. Your tasks must conform to this 3 event structure (which is not hard).
Of course, re-frame will route all three events to their registered handler. The actual WORK of starting the task, or handling the errors, will be done in the event handler that you write.
But, here, none of that menial labour concerns us. Here we care only about the coordination of tasks. We care only that task2 is started when task1 finishes successfully, and we don't need to know what task1 or task2 actually do.
To distill that: we care only that the `dispatch` to start task2 is fired correctly when we have seen an event saying that task1 finished successfully.
When-E1-Then-E2
Read that last paragraph again. It distills further to: when event E1 happens then `dispatch` event E2. Or, more pithily again, When-E1-Then-E2.
When-E1-Then-E2 is the simple case, with more complicated variations like:
- when both events E1 and E2 have happened, then dispatch E3
- when either events E1 or E2 happens, then dispatch E3
- when event E1 happens, then dispatch both E2 and E3
We call these "rules". A collection of such rules defines a "flow".
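As a language-agnostic illustration of the idea (this is a toy sketch, not the library's implementation — the `:when` keywords are simplified to plain strings here), a set of such rules can be evaluated by tracking which events have been seen so far:

```python
def matches(rule, seen):
    """Does a rule fire, given the set of event ids seen so far?"""
    events = rule["events"] if isinstance(rule["events"], list) else [rule["events"]]
    if rule["when"] == "seen-any-of":
        return any(e in seen for e in events)
    # "seen", "seen-both" and "seen-all-of" all mean: every listed event seen
    return all(e in seen for e in events)

def run_flow(rules, event_stream):
    """Feed events through the rules; collect everything they dispatch."""
    seen, fired, dispatched = set(), set(), []
    for event in event_stream:
        seen.add(event)
        for i, rule in enumerate(rules):
            if i not in fired and matches(rule, seen):
                fired.add(i)   # each rule fires at most once in this sketch
                dispatched.extend(rule.get("dispatch", []))
                if rule.get("halt"):
                    return dispatched
    return dispatched
```

For example, a "seen-both" rule only fires once both of its events have arrived, in any order.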
Flow As Data
Collectively, a set of When-E1-then-E2 rules can describe the entire async boot flow of an app.
Here's how that might look in data:

```clj
[{:when :seen?        :events :success-db-connect  :dispatch-n '([:do-query-user] [:do-query-site-prefs])}
 {:when :seen-both?   :events [:success-user-query :success-site-prefs-query]  :dispatch [:success-boot] :halt? true}
 {:when :seen-any-of? :events [:fail-user-query :fail-site-prefs-query :fail-db-connect]  :dispatch [:fail-boot] :halt? true}
 {:when :seen?        :events :success-user-query  :dispatch [:do-intercom]}]
```

That's a vector of 4 maps (one per line), where each represents a single rule. Try reading each line as if it was an English sentence and something like this should emerge: "when we have seen all of events E1 and E2, then dispatch this other event".

The structure of each rule (map) is:

```clj
{:when W          ;; one of: :seen?, :seen-both?, :seen-all-of?, :seen-any-of?
 :events X        ;; either a single keyword or a seq of keywords representing event ids
 :dispatch Y      ;; (optional) single vector (to dispatch)
 :dispatch-n Z    ;; (optional) list of vectors (to dispatch)
 :halt? true}     ;; optional, will teardown the flow after the last event is dispatched
```

> Although optional, only one of `:dispatch` or `:dispatch-n` can be specified.
In our mythical app, we can't issue a database query until we have a database connection, so the 1st rule (above) says:
1. When `:success-db-connect` is dispatched, presumably signalling that we have a database connection...
2. then `(dispatch [:do-query-user])` and `(dispatch [:do-query-site-prefs])`

We have successfully booted when both database queries succeed, so the 2nd rule says:
1. When both success events have been seen (they may arrive in any order),
2. then `(dispatch [:success-boot])` and cleanup because the boot process is done.

If any task fails, then the boot fails, and the app can't start, which means go into a failure mode, so the 3rd rule says:
1. If any one of the various tasks fail...
2. then `(dispatch [:fail-boot])` and cleanup because the boot process is done.

Once we have user data (from the user-query), we can start the intercom process, so the 4th rule says:
1. When `:success-user-query` is dispatched
2. then `(dispatch [:do-intercom])`
Further Notes:
- The 4th rule starts "Intercom" once we have user data. But notice that nowhere do we wait for a `:success-intercom`. We want this process started, but it is not essential for the app's function, so we don't wait for it to complete.
- The coordination processes never actively participate in handling any events. Event handlers themselves do all that work. They know how to handle success or failure - what state to record so that the twirly thing is shown to users, or not. What messages are shown. Etc.
- A naming convention for events is adopted. Each task can have 3 associated events which are named as follows: `:do-*` is for starting tasks. Task completion is either `:success-*` or `:fail-*`.
- The `:halt?` value of `true` means the boot flow is completed. Clean up the flow coordinator. It will have some state somewhere. So get rid of that. And it will have been "sniffing events", so stop doing that, too. You should provide at least one of these in your rules.
- There's nothing in here about the teardown process as the application is closing. Here we're only helping the boot process.
- There will need to be something that kicks off the whole flow. In the case above, presumably a `(dispatch [:do-connect-db])` is how it all starts.
- A word on Retries. XXX
The Flow Specification
The `:async-flow` data structure has the following fields:

- `:id` - optional - an identifier, typically a namespaced keyword. Each flow should have a unique id. Must not clash with the identifier for any event handler (because internally an event handler is registered using this id). If absent, `:async/flow` is used. If this default is used then two flows can't be running at once because they'd be using the same id.
- `:db-path` - optional - the path within `app-db` where the coordination logic should store state. Two pieces of state are stored: the set of seen events, and the set of started tasks. If absent, then state is not stored in `app-db` and is instead held in an internal atom. We prefer to store state in `app-db` because we like the philosophy of having all the data in the one place, but it is not essential.
- `:first-dispatch` - optional - the event which initiates the async flow. This is often something like the event which will open a websocket or HTTP GET configuration from the server. If omitted, it is up to you to organise the dispatch of any initial event(s).
- `:rules` - mandatory - a vector of maps. Each map is a `rule`.

A `rule` is a map with the following fields:

- `:when` - one of `:seen?`, `:seen-both?`, `:seen-all-of?`, `:seen-any-of?` (`:seen?`, `:seen-both?` and `:seen-all-of?` are interchangeable)
- `:events` - either a single keyword, or a collection of keywords, presumably event ids. A collection can also contain whole event vectors that will be matched, or event predicates that return true or false when passed an event vector.
- `:dispatch` - can be a single vector representing one event to dispatch
- `:dispatch-n` - to dispatch multiple events, must be a coll where each elem represents one event to dispatch
- `:dispatch-fn` - can be a function that accepts the seen event, and returns a coll where each elem represents one event to dispatch
- `:halt?` - optional boolean. If true, the flow enters teardown and stops.
How does async-flow work? It does the following:

1. It dynamically creates an event handler and registers it using the supplied `:id`.
2. It arranges for all `:events` mentioned in `flow` rules to be "forwarded" to this event handler, after they have been handled by their normal handlers. So, if the event `:abc` was part of the `flow` spec, then after `[:abc 1]` was handled by its normal handler there would be an additional `(dispatch [:id [:abc 1]])` which would be handled by the coordinator created in steps 1 and 2.
3. It keeps its state at `:db-path`, or in a local atom if there is no `:db-path`.
4. On each forwarded event, it uses the `flow` specification and the state it internally maintains to work out how it should respond to each newly forwarded event.
5. Eventually a rule with `:halt?` set to `true` matches. The event handler de-registers itself, removes its state, and stops all event sniffing.

Notes:
1. This pattern is flexible. You could use it to implement a more complex FSM coordinator.
2. All the work is done in a normal event handler (dynamically created for you). And it is all organised around events which this event handler processes. So this solution is aligned on re-frame fundamentals.
In some circumstances, it is necessary to hook into not just the event itself, but the data carried in the event. In these cases, functions can be used as an event predicate, or a dispatch rule.

For example, when uploading a file, a success event may return an id which needs to be passed on to a subsequent event:

```clj
{:when :seen?
 :events :upload/success
 :dispatch-fn (fn [[e id]] [[:remote/file-uploaded id]])}
```

Or, to dispatch a server error event if a status of 500 or above has been seen:

```clj
{:when :seen?
 :events (fn [[e status]] (and (= e :http/response-received) (>= status 500)))
 :dispatch [:server/error]}
```
Managing async task flow means managing time, and managing time requires a state machine. You need:
- some retained state (describing where we have got to)
- events which announce that something has happened or not happened (aka FSM triggers)
- a set of rules about transitioning app state and triggering further activity when events arrive.
One way or another you'll be implementing a state machine. There's no getting away from that.
Although there are ways of hiding it!! Redux-saga uses ES6 generator functions to provide the illusion of a synchronous control flow. The "state machine" is encoded directly into the generator function's statements (sequentially, or via `if`/`then`/`else` logic, or via loops). And that's a nice and simple abstraction for many cases. This equivalence between a state machine and generator functions is further described here.
But, as always, there are trade-offs.
First, the state machine is encoded in javascript "code" (the generator function implements the state machine). In clojure, we have a preference for "programming in data" where possible.
Second, coding (in javascript) a more complicated state machine with a bunch of failure states and cascades will ultimately get messy. Time is like a liquid under pressure and it will force its way out through the cracks in the abstraction. A long history with FSM encourages us to implement state machines in a data driven way (a table driven way).
So we choose data and reject the redux-saga approach (while being mindful of the trade-offs).

But it would be quite possible to create a re-frame version of redux-saga. In ClojureScript we have `core.async` instead of generator functions. That is left as an exercise for the motivated reader.
A motivated user might also produce a fully general FSM version of this effects handler. | https://xscode.com/day8/re-frame-async-flow-fx | CC-MAIN-2020-40 | refinedweb | 3,180 | 64.91 |
How to delete a file in Python with examples
In order to delete a file in Python, you have to import the os module. In this tutorial, you will learn how to delete a file in Python with some easy examples. The function we are going to use to delete a file from a directory is os.remove()
Delete a file in Python
Here we are going to show you how to delete a file from a directory with an easy example.
We have a text file in our directory. The file name of the text file is: this_is_file.txt
( os.remove() can delete any type of file; it does not need to be a text file )
Now we are going to write a Python program in the same directory to delete the file.
import os
os.remove('this_is_file.txt')
If we run this program, this_is_file.txt will be deleted from the directory.
If your file is located in another sub-directory, you can simply put the path instead of the file name to delete the file:
import os
os.remove('path')
You can use both single quote and double quote to surround the path of the file.
os.remove in Python
Return Type: It does not return any value.
Parameter: Path of the file is to be passed as a parameter surrounded by single quotes or double quotes.
os.rmdir in Python to delete an entire directory
import os
os.rmdir('directory')
Return Type: It does not return any value.
Parameter: Directory or directory path is to be passed as a parameter surrounded by single quotes or double quotes.
Special note:
It will only delete an empty directory.
To delete an entire directory with all of its contents you have to use the following:

import shutil
shutil.rmtree('mydir') | https://www.codespeedy.com/how-to-delete-a-file-in-python-with-examples/ | CC-MAIN-2020-45 | refinedweb | 299 | 73.27
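Putting the pieces together, here is a small helper (the file names are made up for the example) that removes a path whether it is a file or a directory, and reports a missing path instead of raising:

```python
import os
import shutil
import tempfile

def delete_path(path):
    """Delete a file or a whole directory tree; return False if it did not exist."""
    if os.path.isdir(path):
        shutil.rmtree(path)    # handles non-empty directories too
        return True
    try:
        os.remove(path)        # regular file
        return True
    except FileNotFoundError:
        return False

# usage: make a scratch file, then delete it twice
fd, name = tempfile.mkstemp()
os.close(fd)
print(delete_path(name))   # True  (file removed)
print(delete_path(name))   # False (already gone)
```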
Opened 7 years ago
Closed 3 years ago
#11732 closed New feature (wontfix)
ModelAdmin.get_changelist_form should use ModelAdmin.form
Description
When you create a ModelForm to associate with a ModelAdmin, using the 'form' option, this form will not be used on the list_editable form. When I define a .clean() method, I'd like the validation to work both on the changeform and on the changelist.
A quick fix:
  def get_changelist_form(self, request, **kwargs):
      defaults = {
+         "form": self.form,
          "formfield_callback": curry(self.formfield_for_dbfield, request=request),
      }
      defaults.update(kwargs)
      return modelform_factory(self.model, **defaults)
Change History (10)
comment:1 Changed 7 years ago by edcrypt
- milestone set to 1.2
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
comment:2 Changed 7 years ago by lukeplant
- Triage Stage changed from Unreviewed to Design decision needed
comment:3 Changed 6 years ago by ubernostrum
- milestone 1.2 deleted
1.2 is feature-frozen, moving this feature request off the milestone.
comment:4 Changed 6 years ago by andybak
- Cc andy@… added
comment:5 Changed 6 years ago by subsume
I agree with lukeplant and yet with raw_id_fields, here we are creating an admin option which affects both.
comment:6 Changed 5 years ago by julien
- Severity set to Normal
- Type set to New feature
comment:7 Changed 5 years ago by julien
- Easy pickings unset
- UI/UX unset
comment:8 Changed 4 years ago by anonymous
Any updates on this ?
comment:9 Changed 3 years ago by pwbdecker@…
Presently to add validation to the change list form I have to override get_changelist_form to return my custom form. I would also like to see the option to be able to use the admin options form.
comment:10 Changed 3 years ago by jacob
- Resolution set to wontfix
- Status changed from new to closed
I'm with Luke: this isn't something we should change; the downsides outweigh the upsides.
That's a really bad idea, and would cause lots of breakages, because many people will be assuming that the Form they have created will be used only for the change form, and not the change list. We need a new attribute like 'changelist_form', if anything. | https://code.djangoproject.com/ticket/11732 | CC-MAIN-2016-22 | refinedweb | 362 | 63.09 |
Python-WebSocket Tutorial -Live Forex Rates.
Let’s get started - Setting up the environment
Before we start we need to get our environment ready, we will install the software required in 3 simple steps.
Setup
- Setup Python
- Install pip
Step 1. Install Python
Step 2. Install pip
For Windows:
pip is installed by default
For Linux:
$sudo apt-get install python3-pip
Now we can set up our project
Let's write some code
Inside your directory create a new file testClient.py; you can do this in your favorite editor, or Notepad/VI if you are just getting started.
As this is a live WebSocket we want the program to continue to run whilst we have a live connection. For this, we use the thread class and the WebSocket run_forever() option.
Import the libs
import websocket
import time
try:
    import thread
except ImportError:
    import _thread as thread
For this example, we are just going to append the incoming data to a log file so we need to create the file handler.
f = open("webSocketTester.log", "a")

def on_message(ws, message):
    print(message)
    f.write("Live fx rates" + message + "\n")
    f.flush()

def on_error(ws, error):
    print(error)

def on_close(ws):
    print("### closed ###")

def on_open(ws):
    def run(*args):
        ws.send("{\"userKey\":\"USER_KEY\", \"symbol\":\"GBPUSD\"}")
    thread.start_new_thread(run, ())
Now we have the logger and the handler we need to create the WebSocket, we will do this in the main function of the program.
if __name__ == "__main__":
    ws = websocket.WebSocketApp("wss://marketdata.tradermade.com/feed",
                                on_message=on_message,
                                on_error=on_error,
                                on_close=on_close)
    ws.on_open = on_open
    ws.run_forever()
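A side note on the subscribe message: rather than hand-escaping the quotes in the string passed to ws.send, the same payload can be built with json.dumps (the userKey/symbol field names are taken from the tutorial's snippet):

```python
import json

def subscribe_message(user_key, symbol):
    """Build the JSON subscribe payload sent in on_open, without manual escaping."""
    return json.dumps({"userKey": user_key, "symbol": symbol})

print(subscribe_message("USER_KEY", "GBPUSD"))
# {"userKey": "USER_KEY", "symbol": "GBPUSD"}
```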
Running the program
For Windows:

$python testClient.py

For Linux:

$python3 testClient.py

The full program:

import websocket
import time
try:
    import thread
except ImportError:
    import _thread as thread

f = open("webSocketTester.log", "a")

def on_message(ws, message):
    print(message)
    f.write(message + "\n")
    f.flush()

def on_error(ws, error):
    print(error)

def on_close(ws):
    print("### closed ###")

def on_open(ws):
    def run(*args):
        ws.send("{\"userKey\":\"USER_KEY\", \"symbol\":\"GBPUSD\"}")
    thread.start_new_thread(run, ())

if __name__ == "__main__":
    ws = websocket.WebSocketApp("wss://marketdata.tradermade.com/feed",
                                on_message=on_message,
                                on_error=on_error,
                                on_close=on_close)
    ws.on_open = on_open
    ws.run_forever()
TraderMade works with companies and developers that demand real-time services and we are able to do that by using AWS (Amazon Web Services) as our data center partner, therefore clients can be assured of continued low-latency and high-frequency products and services.
How Can I Get More Information and Trial For Myself?
For more information contact sales@tradermade.com or Live Chat with a member of the team at.
To trial immediately, just follow the Documentation on our Market Data website using this link | https://tradermade.medium.com/python-websocket-tutorial-live-forex-rates-de8e6d78a75c?source=post_internal_links---------3---------------------------- | CC-MAIN-2022-27 | refinedweb | 386 | 58.79 |
In preparation to installing Virtual Server, and acknowledging the fact that it needs IIS, I would like to know:
TIA
IIS stores the Virtual-to-FileSystem mapping inside of its configuration file, and whenever it receives a HTTP request for a URL in the Virtual namespace, IIS translates it to a resource in the FileSystem namespace using its configuration.
At this point, you can have any other mapping in effect in the FileSystem namespace (for example, NTFS junction points) to do further name mapping, but we are getting a bit more complicated now...
FYI: Virtual Server does not need IIS to administer/function. I use Virtual Server all the time without IIS. See this blog entry:
//David | http://blogs.msdn.com/b/david.wang/archive/2005/12/24/iis-and-virtual-server-conversations.aspx | CC-MAIN-2015-35 | refinedweb | 117 | 56.79 |
:Expensive (Score 1) 105
Comment: Re: Decent (Score 2) 466
Comment: Re: Managers need an algorithm for that? (Score 4, Funny) 210
Yeah, that's pretty much how it goes. I don't normally wear jeans to work though.
In some workplaces that could be considered inappropriate, in others it may depend on your style of underwear.
Comment: Re: Tabs vs Spaces (Score 1) 427
The moment you throw in a few spaces to line something up on a non-tab boundary (say, to align a second line of arguments with the first argument), then you have a mess
It is trivial to avoid this problem by configuring tab-width to 1. Yet again, newbie configuration wins over decades of experience.
Comment: Re:And yet, no one understands Git. (Score 1) 202
Comment: Re:It's the cloud (Score 1) 146
Comment: Re:Android, not quite an Egg but close. (Score 1) 290
Comment: Re:Patent? (Score 1) 111
Comment: Re:Same question as I had more than a decade ago (Score 1) 198
Comment: Re:Brilliant idea (Score 1) 193
Comment: Re:Arduino? Good riddance! (Score 1) 92
#include "mbed.h"

DigitalIn enable(p5);
DigitalOut led(LED1);

int main() {
    while(1) {
        if(enable) {
            led = !led;
        }
        wait(0.25);
    }
}
Comment: Re:Arduino? Good riddance! (Score 1) 92
Comment: Re:You are missing the obvious point! (Score 1) 349
If a person works 35-40 hours a week should they receive the same pay as someone working 45-50 hours? Anyone looking at that should say "No, the person working more hours should receive more pay." but somehow this obvious point eluded you.
We have two people - one who completes a set amount of work in 35 hours, and another who completes the same amount of work in 50 hours. And you want to pay the second person more | http://slashdot.org/~jrumney/tags/notthebest | CC-MAIN-2015-18 | refinedweb | 308 | 73.27 |
I have simple app in C that is using POSIX struct sigevent.
#include <signal.h>
int main(int argc, char *argv[])
{
struct sigevent sig_event;
return 0;
}
When I compile it like this:
gcc test.c
it is fine. When I force C11 mode, it fails:
gcc test.c --std=c11
test.c: In function ‘main’:
test.c:5:21: error: storage size of ‘sig_event’ isn’t known
struct sigevent sig_event;
I'm using gcc 5.2.1 on Ubuntu 15.10. Any ideas what is causing those errors? This problem first occured when I tried to compile example from manual for timer_create() function. Situation was the same, except for much more errors.
The header <signal.h> is part of standard C. But POSIX adds more to it. Since struct sigevent is not standard C but POSIX, -std=c11 disables it (probably an #ifdef somewhere).

gcc test.c

works because gcc by default enables a certain level of POSIX functions and a lot of GNU extensions.

Compile it with:

gcc -std=c11 -D_POSIX_C_SOURCE=200809 file.c | http://www.dlxedu.com/askdetail/3/ccb8f13f8d518258eda7a51a7b6b4a4b.html | CC-MAIN-2018-47 | refinedweb | 173 | 87.72
In C#, a variable is a storage location where a value is stored.

For example:

int x = 1;

In the above example, the variable name is 'x', the variable is of "int" type, and its value is '1'.

When we declare a variable, it has a name and a data type. A data type determines what values can be assigned to the variable, for instance integers, strings, or boolean values.

These variables are then used in different lines of code to get the desired output. Variables are usually declared with initial values; in the above example the initial value of 'x' is '1'.
<data type> <name> = <value>;
Data type = integer, string, boolean or float
name = name can be anything which is relevant in the program
Let's create a sample program declaring various types of variables in it.
using System;

namespace VariablesInCSharp
{
    public class Program
    {
        public static void Main(string[] args)
        {
            //string variables
            string name = "Vikas";
            string country = "India";
            //int variables
            int age = 30;
            //float variable
            float salary = 50500.00f;

            Console.WriteLine(name);
            Console.WriteLine(country);
            Console.WriteLine(age);
            Console.WriteLine(salary);

            //changing country value here
            country = "United States";
            Console.WriteLine(country);
        }
    }
}
In the above example, we have three types of variables declared
//string variables
string name = "Vikas";
string country = "India";
//int variables
int age = 30;
//float variable
float salary = 50500.00f;
Output:
Vikas India 30 50500 United States
Notice, in the above code when printing value of "Country" second time it is changed to "United States", as we assingned it new value
//changing country value here country= "United States";
Variables at method scope can be implicitly typed using the var keyword. The variables are still strongly typed, but with var the type is inferred by the C# compiler from the right side of the assignment.

This is a variable declaration which does not require an explicit data type. We can declare an implicitly typed variable using the var keyword; the type of the variable is identified at compile time depending upon the value assigned to it.
take a look at the below examples:
var a = 10;              // a is of type int
var str = "Hello World"; // str is of type string
var f = 3.14f;           // f is of type float
var z;                   // invalid, as no value is provided
Let's take a look at another example
using System;

namespace ImplicitTypeSample
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var name = "John";
            var age = 22;
            Console.WriteLine("{0} is {1} years old", name, age);

            name = "Martin";
            age = 30;
            Console.WriteLine("{0} is {1} years old", name, age);

            Console.WriteLine(name.GetType());
            Console.WriteLine(age.GetType());
        }
    }
}
In the program we have two implicitly typed variables.
var name = "John"; var age = 22;
On the left side of the assignment we use the var keyword. The name variable is of type string and age is of type int. The types are inferred from the value on the right side of the assignment; we can check the type using the .GetType() method, as shown below:
Console.WriteLine(name.GetType());
Console.WriteLine(age.GetType());
Output:
John is 22 years old
Martin is 30 years old
System.String
System.Int32
Change to the public schema document for the XML namespace(xml.xsd)
Discussion in 'XML' started by Henry S. Thompson, Sep 7, 2005.
Major VB.NET Changes
VB.NET has introduced major changes to the VB language. Some are modifications to existing ways of working, whereas others are brand new. This chapter will cover some of those changes, but this is by no means an exhaustive list of all changes from VB to VB.NET. First, you'll see some of the features that have changed. Then you will see some of the new features.
General Changes
There are a number of general changes to be aware of when moving from VB to VB.NET. One of the most significant involves default properties. In VB6, many objects had a default property; for a TextBox it was Text, so setting one text box variable equal to another copied the Text property string. The major drawback to default properties is that they require you to have a Set command in VB. For example, take a look at the following block of VB6 code:
Dim txtBillTo As TextBox
Dim txtShipTo As TextBox
txtShipTo = txtBillTo
The line of code txtShipTo = txtBillTo sets the Text property of txtShipTo to the value in the Text property of txtBillTo. But what if this isn't what you wanted? What if, instead, you wanted to create an object reference in txtShipTo that referred to txtBillTo? You'd have to use this code:
Set txtShipTo = txtBillTo
As you can see, default properties require you to use the Set keyword to set references from one object variable to another.
VB.NET gets around this problem by getting rid of default properties. Therefore, to copy the Text property from txtBillTo into the Text property of txtShipTo, you'd have to use this code:
txtShipTo.Text = txtBillTo.Text
Setting the two variables equal to each other sets a reference from one to the other. In other words, you can set an object reference without the Set keyword:
txtShipTo = txtBillTo ' Object reference in VB.NET
To be more precise, default properties without parameters are no longer supported. Default properties that require parameters are still valid. Default properties with parameters are most common with collection classes, such as in ADO. In an ADO example, if you assume that rs is an ADO Recordset, check out the following code:
Rs.Fields.Item(x).Value ' OK, fully qualified
Rs.Fields(x).Value      ' OK, because Item is parameterized
Rs.Fields(x)            ' Error, because Value is not parameterized
The easy solution is to fully qualify everything. This avoids any confusion about which properties are parameterized and which are not.
Subs and Functions Require Parentheses
When you use the MsgBox function, you must now always use parentheses with functions, even if you are ignoring the return value. In addition, you must use parentheses when calling subs, which you did not do in VB6. For example, assume that you have this sub in both VB6 and VB.NET:
Sub foo(ByVal y As Integer)
    ...
End Sub

In VB6, you could call this sub with foo x; in VB.NET, the call must be written as foo(x).
Declaration Changes
You can now initialize your variables when you declare them. You could not do this in VB6. In VB6, the only way to initialize a new variable was to do so on a separate line, like this:
Dim x As Integer
x = 5
In VB.NET, you can rewrite this into one line of code:
Dim x As Integer = 5
Another significant and much-requested change is that of declaring multiple variables, and what data type they assume, on one line. For example, you might have the following line:
Dim x, y As Integer
As you're probably aware, in VB6, y would be an Integer data type, but x would be a Variant. In VB.NET, this has changed, so both x and y are Integers. If you think, "It's about time," there are many who would agree.
Support for New Assignment Operators
VB.NET now supports shortcuts for performing certain assignment operations. In VB6, you incremented x by 1 with the following line of code:
x = x + 1
In VB.NET, you can type an equivalent statement like this:
x += 1
Not only can you use the plus sign in this way, but VB.NET now supports similar shortcut assignment operators for the other operators as well, such as -=, *=, /=, \=, ^=, and &=.
ByVal Is Now the Default

In VB6, arguments were passed by reference (ByRef) by default; in VB.NET, the default is by value (ByVal). If you tried to type the earlier foo example into VB.NET, you'd see this happen: first, of course, you'd have to add parentheses around x in your call to foo. And when you tried to type the definition of foo, VB.NET would automatically add the word ByVal into the definition, so it would end up looking like this:
Sub foo(ByVal y As Integer)
If you wanted to pass by reference, you would have to add the ByRef keyword yourself, instead of VB.NET using the new default of ByVal.
While...Wend Becomes While...End While
The While loop is still supported, but the closing of the loop is now End While instead of Wend. If you type Wend, the editor automatically changes it to End While.
Block-Level Scope
VB.NET adds the ability to create variables that are visible only within a block. A block is any section of code that ends with one of the words End, Loop, or Next. This means that For...Next and If...End If blocks can have their own variables. Take a look at the following code:
While y < 5
    Dim z As Integer
    ...
End While
The variable z is now visible only within the While loop; once the loop ends, z goes out of scope.

Optional Arguments Require a Default Value

In VB6, you could make arguments to a procedure optional, and you could, optionally, give them a default value. That way, if someone chose not to pass in a value for an argument, you had a value in it. If you did not set a default value and the caller did not pass in a value, the only way you had to check was to call the IsMissing function.
IsMissing is no longer supported because VB.NET will not let you create an optional argument that does not have a default value; the argument is guaranteed to have a value whether or not the caller supplies one.

The Return Statement

VB.NET adds a Return statement for returning a value from a function, instead of the old approach of assigning the value to the function name. Take a look at this block of code:
Function foo() As Integer
    While True
        Dim x As Integer
        x += 1
        If x >= 5 Then
            Return x
        End If
    End While
End Function
In this code, you enter what normally would be an infinite loop by saying While True. Inside the loop, you declare an integer called x and begin incrementing it. You check the value of x and, when it becomes greater than or equal to 5, you call the Return statement and return the value of x. This causes an immediate return from the function.

ParamArray Changes

A ParamArray lets a caller pass an unlimited number of arguments to a procedure. A parameter array automatically sizes to hold the number of elements you are passing in.
A procedure can have only one ParamArray, and it must be the last argument in the definition. The array must be one-dimensional, and each element must be the same data type. However, the default data type is Object, which is what replaces the Variant data type in VB.NET.
In VB6, all the elements of a ParamArray are always passed ByRef. This cannot be changed. In VB.NET, all the elements are passed ByVal. This cannot be changed.
Properties Can Be Modified ByRef
In VB6, if a class had a property, you could pass that property to a procedure as an argument. If you passed the property ByVal, of course, it just copied the value. If you passed it ByRef, the called procedure could modify the value, but this new value was not reflected back in the object's property.
In VB.NET, however, you can pass a property to a procedure ByRef, and any changes to the property in the called procedure will be reflected in the value of the property for the object.
Take the following VB.NET example. Don't worry if the Class syntax looks strange; just realize that you are creating a class called Test with one property: Name. You instantiate the class in the Button1_Click event procedure and pass the Name property, by reference, to foo. foo modifies the variable and ends. You then use a message box to print out the Name property. The code looks something like this:

Private Sub Button1_Click(ByVal sender As Object, ByVal e As EventArgs) Handles Button1.Click
    Dim t As New Test()
    foo(t.Name)
    MsgBox(t.Name)
End Sub

Sub foo(ByRef strName As String)
    strName = "Torrey"
End Sub

Public Class Test
    Private strName As String
    Public Property Name() As String
        Get
            Return strName
        End Get
        Set(ByVal value As String)
            strName = value
        End Set
    End Property
End Class
When you run this example, the message box will display "Torrey." This shows that the property of the object is being passed by reference and that changes to the variable are reflected back in the object's property. This is new to VB.NET.
Array Changes
Arrays have undergone some changes as well. Arrays were somewhat confusing in previous versions of VB. VB.NET seeks to address any confusion by simplifying the rules and removing the ability to have nonzero lower boundaries.
Array Size
In VB6, if you left the default for arrays to start at 0, declaring an array actually gave you the upper boundary of the array, not the number of elements. For example, examine the following code:
Dim y(2) As Integer y(0) = 1 y(1) = 2 y(2) = 3
In this VB6 code, you declare that y is an array of type Integer, and the upper boundary is 2. That means.
Lower Boundary Is Always Zero
VB6 allowed you to have a nonzero lower boundary in your arrays in a couple of ways. First, you could declare an array to have a certain range. If you wanted an array to start with 1, you declared it like this:
Dim y(1 To 3) As Integer
This would create an array with three elements, indexed 13. If you didn't like this method, you could use Option Base, which allowed you to set the default lower boundary to either 0 (the default) or 1.
VB.NET removes those two options from you. You cannot use the "1 to x" syntax, and Option Base is no longer supported. In fact, because the lower boundary of the array is always 0, the Lbound function is no longer supported.
Array Assignment Is ByRef Instead of ByVal
In VB6, if you had two array variables and set one equal to the other, you actually were creating a copy of the array in a ByVal copy. Now, in VB.NET, setting one array variable to another is actually just setting a reference to the array. To copy an array, you can use the Clone method.
Data Type Changes
There are several changes to data types that are important to point out. These changes can have an impact on the performance and resource utilization of your code. The data types in VB.NET correspond to the data types in the System namespace, which is important for cross-language interoperability.
In VB6, it was easy to convert from numbers to strings and vice versa. For example, examine this block of code:
Dim x As Integer Dim y As String x = 5 y = x
In VB6, there is nothing wrong with this code. VB will take the value 5 and automatically convert it into the string "5". VB.NET, however, disallows this type of conversion by default. Instead, you would have to use the CStr function to convert a number to a string, or the Val function to convert a string to a number. You could rewrite the preceding code for VB.NET in this manner:
Dim x As Integer Dim y As String x = 5 y = CStr(x)
Fixed-Length Strings Not Supported
In VB6, you could declare a fixed-length string by using a declaration like the one shown here:
Dim y As String * 30
This declared y to be a fixed-length string that could hold 30 characters. If you try this same code in VB.NET, you will get an error. All strings in VB.NET are of variable length.
All Strings Are Unicode
If you got tired of worrying about passing strings from VB to certain API calls that accepted either ANSI or Unicode strings, you will be happy to hear that all strings in VB.NET are Unicode.
The Value of True

In VB6, True has an underlying numeric value of -1, whereas the .NET common type system defines True as 1. Because of this difference, you should never compare a value against the numeric representation of True; test for True or False directly instead.
The Currency Data Type Has Been Replaced
The Currency data type has been replaced by the Decimal data type. The Decimal data type is a 128-bit (16-byte) value that can have up to 28 digits to the right of the decimal place. It supports a much higher precision than the Currency data type and has been designed for applications that cannot tolerate rounding errors. The Decimal data type has a direct equivalent in the .NET Framework, which is important for cross-language interoperability.
The Variant Data Type Has Been Replaced
The Variant data type has been replaced. Before you start thinking that there is no longer a catch-all variable type, understand that the Variant has been replaced by the Object data type. The Object data type takes up only 4 bytes because all it holds is a memory address. Therefore, even if you set the variable to an integer, the Object variable holds 4 bytes that point to the memory used to store the integer. The integer is stored on the heap. It will take up 8 bytes plus 4 bytes for the integer, so it consumes a total of 12 bytes. Examine the following code:
Dim x x = 5
In this case, x is a variable of type Object. When you set x equal to 5, VB.NET stores the value 5 in another memory location and x stores that memory address. When you attempt to access x, a lookup is required to go find that memory address and retrieve the value. Therefore, just like the Variant, the Object data type is slower than using explicit data types.
According to the VB.NET documentation, the Object data type allows you to play fast and loose with data type conversion. For example, look at the following code:
Dim x x = "5" ' x is a string x =. Actually, the old syntax still works, but there is a new error-handling structure called Try...Catch...Finally that removes the need to use the old On Error Goto structure.
The overall structure of the Try...Catch...Finally syntax is to put the code that might cause an error in the Try portion and then catch the error. Inside the Catch portion, you handle the error. The Finally portion runs code that happens after the Catch statements are done, regardless of whether there was an error. Here is a simple example:
Dim x, y As Integer ' Both will be integers
Try
    x \= y ' cause division by zero
Catch ex As Exception
    msgbox(ex.Message)
End Try
Here, you have two variables that are both integers. You attempted to divide x by y, but because y has not been initialized, it defaults to 0. That division by zero raises an error, and you catch it in the next line. The variable ex is of type Exception, which holds the error that just occurred, so you simply print the Message property, much like you printed the Err.Description in VB6.
In fact, you can still use Err.Description and the Err object in general. The Err object will pick up any exceptions that are thrown. For example, assume that your logic dictates that an error must be raised if someone's account balance falls below zero, and another error if the balance falls below 10,000. The code might look something like this:

Try
    If bal < 0 Then
        Throw New Exception("Balance is below zero")
    ElseIf bal < 10000 Then
        Throw New Exception("Balance is low; start charging interest.")
    End If
Catch ex As Exception
    msgbox(Err.Description)
End Try
In this case, your business logic says that if the balance drops below zero, you raise an error informing the user that the balance is below zero. If the balance drops below 10,000 but remains above zero, you notify the user to start charging interest. In this case, Err.Description picks up the description you threw in your Exception.
You can also have multiple Catch statements to catch various errors. To have one Catch statement catch a particular error, you add a When clause to that Catch. Examine this code:
Try
    x \= y ' cause division by zero
Catch ex As Exception When Err.Number = 11
    msgbox("You tried to divide by zero")
Catch ex As Exception
    msgbox("Acts as catch-all")
End Try
In this example, you are looking for an Err.Number of 11, which is the error for division by zero. Therefore, if you get a division-by-zero error, you will display a message box that says, "You tried to divide by zero." However, if a different error were to occur, you would drop into the Catch without a When clause. In effect, the Catch without a When clause acts as an "else" or "otherwise" section to handle anything that was not handled before.
Notice that the code does not fall through all the exceptions; it stops on the first one it finds. In the preceding example, you will raise an Err.Number of 11. You will see the message "You tried to divide by zero," but you will skip over the catch-all Catch. If you want to run some code at the end, regardless of which error occurred (or, indeed, if no error occurred), you could add a Finally statement. The code to do so follows:
Try
    x \= y ' cause division by zero
Catch ex As Exception When Err.Number = 11
    msgbox("You tried to divide by zero")
Catch ex As Exception
    msgbox("Acts as catch-all")
Finally
    msgbox("Running finally code")
End Try
In this code, whether or not you hit an error, the code in the Finally section will execute.
Structures Replace UDTs
User-defined types, or UDTs, were a way to create a custom data type in VB6. You could create a new data type that contained other elements within it. For example, you could create a Customer data type using this syntax:
Private Type Customer
    Name As String
    Income As Currency
End Type
You could then use that UDT in your application, with code like this:
Dim buyer As Customer buyer.Name = "Craig" buyer.Income = 20000 MsgBox buyer.Name & " " & buyer.Income
The Type statement is no longer supported in VB.NET. It has been replaced by the Structure statement. The Structure statement has some major changes, but to re-create the UDT shown earlier, the syntax is this:
Structure Customer
    Dim Name As String
    Dim Income As Decimal
End Structure
Notice that the only real difference so far is that you have to Dim each variable inside the structure, which is something you did not have to do in the Type statement, even with Option Explicit turned on. Notice also that Dim is the same as Public here, meaning that the variables are visible to any instance of the structure.
Structures have many other features, however. One of the biggest differences is that structures can support methods. For example, you could add a Shipping method to the Customer structure to calculate shipping costs based on delivery zone. This code shows adding a DeliveryZone property and a DeliveryCost function:
Structure Customer
    Dim Name As String
    Dim Income As Decimal
    Dim DeliveryZone As Integer

    Function DeliveryCost() As Decimal
        If DeliveryZone > 3 Then
            Return 25
        Else
            Return CDec(12.5)
        End If
    End Function
End Structure
Here, you have a built-in function called DeliveryCost. To use this structure, your client code would look something like this:
Dim buyer As Customer
buyer.Name = "Craig"
buyer.DeliveryZone = 4
msgbox(buyer.Name & " pays " & buyer.DeliveryCost())

One other important point: if you assign a structure variable to another structure variable, you get a copy of the structure and not a reference to the original structure. The following code shows that a copy is occurring, because an update to seller does not update buyer:
Dim buyer As Customer
Dim seller ' Object data type
seller = buyer
seller.Name = "Linda"
msgbox(buyer.Name)
msgbox(seller.Name)
Note that the preceding code will not work if you have Option Strict turned on. If you want to test this, you'll have to enter Option Strict Off.
Visualize task graphs¶
Before executing your computation you might consider visualizing the underlying task graph. By looking at the inter-connectedness of tasks you can learn more about potential bottlenecks where parallelism may not be possible, or areas where many tasks depend on each other, which may cause a great deal of communication.
The .visualize method and dask.visualize function work exactly like the .compute method and dask.compute function, except that rather than computing the result, they produce an image of the task graph.

By default the task graph is rendered from top to bottom. If you prefer to visualize it from left to right, pass rankdir="LR" as a keyword argument to .visualize.
import dask.array as da

x = da.ones((15, 15), chunks=(5, 5))
y = x + x.T

# y.compute()
y.visualize(filename='transpose.svg')
Note that the visualize function is powered by the GraphViz system library. This library has a few considerations:

- You must install both the graphviz system library (with tools like apt-get, yum, or brew) and the graphviz Python library. If you use Conda then you need to install python-graphviz, which will bring along the graphviz system library as a dependency.
- Graphviz takes a while on graphs larger than about 100 nodes. For large computations you might have to simplify your computation a bit for the visualize method to work well. | http://docs.dask.org/en/latest/graphviz.html | CC-MAIN-2019-09 | refinedweb | 232 | 56.86 |
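Under the hood, a Dask graph is just a dictionary of tasks, which is why it can be rendered as a picture at all. The sketch below is plain Python, not using dask itself (the graph contents and the toy scheduler are illustrative), but it follows Dask's documented dict format: keys are task names, and values are either literals or tuples of (function, *arguments), where an argument naming another key forms an edge in the graph.

```python
from operator import add, mul

# A toy task graph in Dask's documented dict format.
graph = {
    "x": 1,
    "y": 2,
    "xy": (add, "x", "y"),      # xy depends on x and y
    "result": (mul, "xy", 10),  # result depends on xy
}

def get(dsk, key):
    """Minimal recursive scheduler: resolve a key by first resolving its dependencies."""
    val = dsk[key]
    if isinstance(val, tuple) and callable(val[0]):
        func, *args = val
        # An argument is a dependency only if it names another key in the graph.
        return func(*(get(dsk, a) if (isinstance(a, str) and a in dsk) else a
                      for a in args))
    return val

print(get(graph, "result"))  # 30
```

Tools like visualize simply walk this same dict and draw one node per key and one arrow per key-valued argument.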
ISSUE-109: Datatype for phenomenonTime and resultTime
ahaller2
Datatype for phenomenonTime and resultTime
- State:
- CLOSED
- Product:
- Semantic Sensor Network Ontology
- Raised by:
- Armin Haller
- Opened on:
- 2016-12-15
- Description:
- Currently SOSA defines a xsd:dateTime datatype for resultTime, but no datatype for phenomenonTime. Should both be the same, or is there a difference, i.e. phenomenonTime could be an interval. Is there a need to align that with the Time ontology.
- Related Actions Items:
- No related actions
- Related emails:
- Re: State of SSN: arguments in favour of a single name and namespace, proposal, the SEAS example, proposal of action (from armin.haller@anu.edu.au on 2017-02-06)
- Re: SOSA time properties - was RE: Comments on the SOSA and SSN implementations (from armin.haller@anu.edu.au on 2016-12-15)
Related notes:
resolved in Haller, 10 Apr 2017, 06:36:54
The following example demonstrates how you can remove the middle name from a complete name.
using System;

public class RemoveTest
{
    public static void Main()
    {
        string name = "Michelle Violet Banks";
        Console.WriteLine("The entire name is '{0}'", name);

        // remove the middle name, identified by finding the spaces in the middle of the name...
        int foundS1 = name.IndexOf(" ");
        int foundS2 = name.IndexOf(" ", foundS1 + 1);
        if (foundS1 != foundS2 && foundS1 >= 0)
        {
            name = name.Remove(foundS1 + 1, foundS2 - foundS1);
            Console.WriteLine("After removing the middle name, we are left with '{0}'", name);
        }
    }
}
// The example displays the following output:
//     The entire name is 'Michelle Violet Banks'
//     After removing the middle name, we are left with 'Michelle Banks'
This post is part of the C programming code series, which helps users understand and learn C through examples. In this article, we will look at a C program to count the number of digits in any integer.
#include <stdio.h>

int main() {
    int num, count = 0;
    printf("Enter the number\n");
    scanf("%d", &num);
    while (num > 0) {
        count++;
        num = num / 10;
    }
    printf("Total digits count = %d\n", count);
}
Enter the number
46837
Total digits count = 5
Now we will break the code into parts and try to understand what is going inside it.
Two integer type variables num and count are declared. We take the input and store them inside it.
The main logic of the c program to count the number of digits in an integer goes inside the while loop. The termination condition is defined as num>0. The variable count is incremented each time the loop runs.
The next line in the loop num = num/10 is the most important part here. It means after each loop removes the last digit. The loop terminates when num becomes 0, it means that every digit is counted.
Let’s understand the example step by step when the input is 46837.
- First Iteration: Count=1, num=46837/10 which means num=4683
- Second Iteration: Count=2, num=4683/10 which means num=46
- Third Iteration: Count=3, num=468/10 which means num=46
- Fourth Iteration: Count=4, num=46/10 which means num=4
- Fifth Iteration: Count=5, num=4/10 which means num=0
- Loop Terminates now as num is 0.
In this tutorial, we will first introduce Flask Contexts and then further look into the two contexts in Flask – Application Context and Request Context.
What is a Flask Context?
Flask uses Context to make certain variables globally accessible, on a temporary basis
In Flask, you might have noticed that the Flask Views don’t take request object as an argument and can still use them. It can be possible only if request objects are global objects, right?
Well, the answer is No.
If the request objects were to be made global, then Flask won’t distinguish between the requests that hit the server simultaneously. But that is not the case; websites does handle multiple requests at the same time. Then how is it possible??
Well, Flask uses Context to make certain variables temporarily global for a particular request object so that the Views can access those variables to return the output.
Flask context is of two types:
- Application Context
- Request Context
Application Contexts in Flask
The application context keeps track of the application-level data. Hence these store values specific to the whole application like the database connections, configurations, etc.
The Application Context exposes (ie, temporarily make them global) objects such as the current_app and a g variable.
1. current_app
current_app refers to the instance handling the requests. That is, it relates to the application Flask is running on.
2. g variable
Here g stands for Global and is used to store data like the database details etc during request handling temporarily.
Once the values for current_app and g variables are set, any View inside the application can use them.
Flask pushes(or activates) the Application Context automatically when a particular request comes in and removes it once the request is handled.
Request Context in Flask
Similar to the Application Context, The request context keeps track of the request-level data. Hence these store values that are specific to each request.
Request Context exposes objects like requests and the sessions.
1. Requests
The request object contains information about the current web request. The request context makes requests temporarily global due to which all the Views can easily access them without taking them as arguments.
Note: request contains information only about the current request. When a new request comes in, the request object stores information about the new request, and the previous information is deleted.
2. Sessions
A session is a dictionary-like object that stores the information which persists between requests, unlike the request object. There will also be an entirely different article on Flask’s sessions soon on our website to give you better information.
Hence once the request context is made active, any View inside the application can access objects (request and sessions) exposed by it.
Like the Application Context, Flask also pushes(or activates) the request context automatically when a particular request comes in, and removes it once the request is handled.
Note: When a Request Context is pushed, it automatically creates an Application Context as well if it is not present already.
Manually Pushing Flask Contexts in the Shell
Flask application creates/pushes the Application and the request Contexts automatically.
Hence, inside the view functions, you can access all the objects exposed by application and request without worrying about the Contexts.
However, if you try to use the objects outside the View Function or in python shell as shown below:
from flask import Flask, request
request.method
You will get an error. Same with the application context objects
from flask import Flask, current_app
current_app.name
This is because the application and the request context are not active. Therefore we first have to create them.
Here, We create the application context using the app_context() method of Flask Instance
Run the code:
from flask import Flask, current_app

app = Flask(__name__)
appli_context = app.app_context()
appli_context.push()
current_app.name
Here
- We declare a Flask object – app.
- We push/create a application context using app.app_context()
- current_app is now active and is linked to the __name__ file i.e. the __main__ file itself.
See, now the error is gone! Similarly, we create the request context using the test_request_context() method of the Flask instance:
from flask import Flask, request

app = Flask(__name__)
req = app.test_request_context()
req.request
Here as well
- We declare a Flask object – app.
- We push/create a request context using app.test_request_context()
- The request object is now active and is linked to the host website, i.e., the " " file itself.
And hence we get a proper error free output.
Conclusion
That’s it, Guys !! That was all about Contexts in Flask. You don’t need to worry much about it since Flask creates them automatically inside the application file. | https://www.askpython.com/python-modules/flask/flask-application-request-context | CC-MAIN-2021-31 | refinedweb | 777 | 56.45 |
Hey all,
I've got a project in the works that uses 3 UARTs. I manage to fit in the 3 required, but I want to add a 4th for debugging purposes if possible. This should be no problem, as I appear to have adequate UDB resources for a half-duplex UART according to the UART component datasheet:
However when I try to build, I'm running out of UDBs during placement. Looks like running out of macrocells? I'm curious as to why this is? Something to do with how macrocells get allocated maybe?
I'm using PSoC Creator 4.3, the PSoC5LP part is a CY8C5667AXI-LP040. I've attached the report file to this post. DBG_UART is the component name for the half-duplex I am trying to add. I had to put it in a .zip because apparently I can't upload .rpt or .txt files.
Appreciate any thoughts,
Thanks!
Hello KyTr.
Although KIT-059 uses a different 5LP vs your project, there was a discussion about using 4 UARTs with KIT-059. KIT-059 has 24 UDB's just like 5667. Maybe some of this information will help.
Some other suggestions:
Use USBUART.
Use software UART.
Use built-in 5LP I2C with KITprog I2C Bridge. Yeah, it's a bit tricky/limited, but I used it in a pinch for debugging when all UDB resources were used up.
Good luck with your project.
Hi BiBi,
Not a whole lot to go on in that linked thread to address the question about UDB resources, unfortunately.
If I had no other components or digital design work I'm sure I could fit 4 UARTs without much trouble. The software UART will get the job done at least for some basic info printed to my terminal. Using the debugger tool I2C bridge is an interesting idea though. I'll file that away for later, I can imagine it being useful at some point down the line.
This project is already using the USBFS component as a HID device but I'm really more interested in why I'm unable to use a UDB UART, when according to the resource meter, I should have the resources to do so.
I'm guessing it has something to do with how UDB resources are allocated? Maybe something think a UART requires "contiguous" resources that it can't place between my other USB components?
Hi,
Just like what BiBi-san has already suggested, I would suggest to consider software UART.
As you wrote that you need it for a debugging purpose,
hoping that you can put up with 9600 baud,
I tried a simple software UART TX
using CY8CKIT-059. (Hopefully it will work with your device.)
schematic
pins
main.c
#include "project.h" #include "stdio.h" #define STR_BUF_LEN 64 #define BIT_DELAY_US 104 /* 9600Hz -> 104.17us */ void soft_tx_send_byte(uint8_t data) { uint8_t mask = 0x01 ; int i ; soft_tx_Write(1) ; /* make sure that the level is high */ CyDelayUs(BIT_DELAY_US * 2) ; soft_tx_Write(0) ; /* generate start bit */ CyDelayUs(BIT_DELAY_US) ; for (i = 0; i < 8; i++ ) { if (mask & data) { soft_tx_Write(1) ; } else { soft_tx_Write(0) ; } mask <<= 1 ; CyDelayUs(BIT_DELAY_US) ; } soft_tx_Write(1) ; /* generate stop bit */ CyDelayUs(BIT_DELAY_US * 2) ; } void soft_tx_send_string(char *str) { while(str && *str) { soft_tx_send_byte(*str++) ; } } int main(void) { char str[STR_BUF_LEN+1] ; int count = 0 ; CyGlobalIntEnable; /* Enable global interrupts. */ soft_tx_send_string("\x1b[2J\x1b[;H") ; soft_tx_send_string("Test Software UART TX ") ; snprintf(str, STR_BUF_LEN, "(%s %s)\n\r", __DATE__, __TIME__) ; soft_tx_send_string(str) ; for(;;) { snprintf(str, STR_BUF_LEN, "count %d\n\r", count++) ; soft_tx_send_string(str) ; CyDelay(1000) ; } }
Then I connected a USB-serial adapter to P3_0 and PC.
The Tera Term log was something like
moto
Hi MotooTanaka,
I will probably end up using the Software Transmit UART component for just some basic debug output, but my real burning question is why can't I place a UDB UART even when I should have the resources to do so according to the resource meter in PSoC Creator. Hoping someone with some more knowledge than I have of how the UDBs work would be able to shed some light on why this might be the case.
In any event, that code for a simple software UART could come in handy elsewhere. Appreciated!
KyTr,
As Rodolfo pointed out, your design is likely runned out of routing resources (P-terms), which are used by 80% (299 out of 384). That is typically the limit of wiring network.
Try using Fixed Function UARTs (4 available), and replace some UDB-based UARTs.
/odissey1
Hi Odissey,
I don't think this part has fixed UARTs unless I am missing something? Something like an I2C, PWM, or Timer component has a FF/UDB selector in the component config. I see no such thing for the UARTs (and none are listed under "Communication" on the resource meter).
I did find my Master I2C component for this project was UDB though, where I can be using FF. Making this change took UDB usage from 76% to 66% and gave me nearly 40 Macrocells and 100 P-Terms back. This lets me fit another UART with no trouble. If I reconfigure my timers and make one of my PWM modules FF, I might be able to save even more.
Thanks for the tip on P-Terms, I'll bear that in mind here on out..
Hi Rodolfo,
Thanks for the tip, I'll mark this as solution to the question.
I did manage to recover some P-Terms and Macrocells by taking my Master I2C component from UDB to FF. Since this design was based on an older one (that had both SI2C and MI2C) I removed the unneeded Slave I2C, but neglected to switch the Master I2C to FF. It was costing me lots of UDB (to the tune of 100 P-Terms and 40 macrocells). With this change I was able to fit my extra UART.
Thanks! | https://community.cypress.com/t5/PSoC-5-3-1-MCU/PSoC5LP-Why-can-t-I-fit-this-UART-into-my-available-UDB/td-p/273511 | CC-MAIN-2021-21 | refinedweb | 981 | 72.46 |
Before I start enumerating the features of TFS 2010, I need to start with some of the big conceptual changes that we’ve made. This post set some architectural groundwork and define some terms that I’ll use in subsequent posts.
Team Project Collections.
Database Changes.
TFS Farms.
Summary
Just to reiterate – you don’t have to do any of this. You can still run TFS on a single server and not have to think about multiple ATs or multiple SQL servers. You can stick with one Team Project Collection if you like.
I believe that these new capabilities will significantly change the way enterprises manage their TFS installations in the future. With Team Project Collections and TFS farms, you can create a single, arbitrarily large TFS installation. You can grow it incrementally by adding ATs and SQL Servers as needed.
Join the conversationAdd Comment
PingBack from
Wow, that sound like a big improvement. Will it be possible to move a project from one collection to another? I am thinking on the upgrade scenario: TFS2008 has no collections and with the upgrade I can imagine that 1 server becomes 1 collection. However, I would like to make use of those collections. Would that be supported?
Regards,
Thys
Team Foundation Server 2010 Key Concepts bharry just post a great article about TFS 2010 key concepts
The Accentient Blog on Microsoft (Team System) makes it to the upper-right (leaders) quadrant Brian Harry
do gripe about new version when your current and previous versions are crapy. Fix crap before make any new concepts.
Thys,
Your assumption about upgrade is exactly correct. When you upgrade a TFS 2008 server to TFS 2010, you will end up with 1 Team Project Collection. After the upgrade you will be able to break up that one collection into multiple smaller collections. However, you can never merge collections back together so I encourage people to be conservative at first until they get some experience with when they want one and when they want multiple.
Brian
Interesting Finds: April 20, 2009
This week on Channel 9, Dan and Brian discuss the week's developer news, including: – Silverlight
"or back up and restore an individual project"
This would mean 1 team project = 1 TPC. Is this a recommended scenario to be able to do project based backup/restore? (e.g. 50 team projects = 50 TPC’s on a server, just to be able to do project based backup/restore).
Since I started with the key architectural concepts, I think the most appropriate place (though perhaps
Daniel, yes, we will be recommending that people use TPCs for any boundary they want independent back up and restore. For some people this will equate to 1 TP = 1TPC. However, for many people, I expect they will have sets of very closely related projects that can live in the same TPC. In the next month or so, I’ll try to write a post on the rules around TPCs – what they can and can’t do and why you may or may not choose to create a separate one.
Brian
Is it too much to hope that there will be native 64-bit support or at a minimum WoW support in this release?
Ask and you shall receive 🙂 Read the pre-reqs section of this post:
Brian
I was using VSTS last year in one of the project in testing project. It was good but was not really helpful for testing purpose. Do we have some new concept for testing execution which can help out tester.
Yes, in 2010, we have a TON of new testing stuff coming. Check out
Brian
This is the first of a series of posts around the topic of Upgrading to Team Foundation server 2010 from
What changes will be made around the checkin policy architecture? At what level will we be able to enable policies (TPC, Project, Branch, etc)?
And is there any improvement in the policy distribution mechanism (will we still have to distribute and set registry on each client ourselves)?
Team Foundation Server 2010 is a BIG release for us by all counts. We have made conceptual changes; Brian
Team Foundation Server 2010 is a BIG release for us by all counts. We have made conceptual changes; Brian
В ближайшее время ожидается выпуск первой беты Visual Studio 2010 и TFS 2010, которые можно будет попробовать
M, Not any significant ones in the checkin policy infrastructure itself in this release. There will still be power tools that address some of the gaps you identify – like path based scoping of checkin policies. The latest release of the Power Tools also supports distribution of checkin policied to clients.
One big difference in this release is the introduction of Gated Checkin where you can do much of your validation on the build machine and side step some of these issues.
Brian
If you’ve upgraded your Team Foundation Server to Team Foundation Server 2010 Beta1 you’re probably wondering
Can the TPC be used for implementing resource quota so that no one collection can hog the resources?
Subodh,
Somewhat yes. Because a TPC is a separate SQL database, you can use all of SQL’s quota mechanisms to throttle it. We don’t really have any mechanism for throttling the application tier. Fortunately, the data tier is usually the bottleneck so if you throttle that, you should be in pretty good shape.
Brian
Last month, Brian Harry outlined some of the new features in the setup and administration experience
TFS will be providing a General Distribution Release (GDR) to enable older versions of Visual Studio
Last month (May 2009) Microsoft has released its first beta for Visual Studio Team System 2010 and Team
Last month (May 2009) Microsoft has released its first beta for Visual Studio Team System 2010 and Team
great, i think, perhaps.
But.. when I sync and see the log of what’s changed, can i right-click on a filename and view history? or do i still have to find it in the solution file 🙁
can I revert a change on a file from the ui yet?
can I drag a file from the solution into the source control window to locate it?
can I see a list of files that are not on the server, and remove them all?
can I check any files for physical differences, rather than pending files, which just shows what VS is aware of?
can I have multiple changesets to keep areas i’m working on separate?
can I associate a work item by ID without having to select a query that I know contains it? or select the query all, and wait a minute for it to come up?
does the ui still flicker as i scroll down pending changes, making it difficult to see which items are checked?
when things go wrong, can I get to a clean point with a click, rather than moving the tree and force syncing all?
But.. when I sync and see the log of what’s changed, can i right-click on a filename and view history? or do i still have to find it in the solution file 🙁
>> No you have to find it either in the solution or in source control explorer. It seems like a pretty cool idea though. I’ll add it to our suggestion list.
can I revert a change on a file from the ui yet?
>> I’m not sure what you mean be "revert". Do you mean undo pending change or rollback?
can I drag a file from the solution into the source control window to locate it?
>> No, why do you want to do this? You can do most everything from the solution explorer.
can I see a list of files that are not on the server, and remove them all?
>> Yes, use folder compare.
can I check any files for physical differences, rather than pending files, which just shows what VS is aware of?
>> This is just an issue from the command line, right? I’ve hated this behavior myself but forgot about it. I just asked that we fix it.
can I have multiple changesets to keep areas i’m working on separate?
>> Only by having more than 1 workspace or by using shelvesets.
can I associate a work item by ID without having to select a query that I know contains it? or select the query all, and wait a minute for it to come up?
>> I’m afraid not. I thought we had made Ctrl+G work for that but I just checked and it doesn’t. I’ll see what we can do.
does the ui still flicker as i scroll down pending changes, making it difficult to see which items are checked?
>> Could you be more specific? We’ve done some work to improve scrolling responsiveness but I can’t say for sure without understanding more about what you are doing and seeing.
when things go wrong, can I get to a clean point with a click, rather than moving the tree and force syncing all?
>> You should be able to leave the tree in place and right click on the root in the source control explorer and do a force get to get things back in sync. Folder diff can also be your friend here.
Brian
Hey,
To add to the list of questions/suggestions:
1. When branching the history of an item (file, folder etc.) in the new branch is being reset so that previous actions performed prior to the branching is emitted and are not shown.
Consider a developer trying to see the complete history of a file… nearly imposibile.
2. Would history pane refresh itself whenever a file is selected? In 2008 you had to select the file and right-click on it asking for HISTORY. That’s even though the history pane is displayed showing the last file which a right-click-History was performed on.
3. When adding remarks on the check-in, a simple spell checker will be appreciated.
At my company, we audit our applications annually, to make sure we are following our development procedures correctly.
One of the audits we do requires us to compare dates of source files to the date of the associated executable file. I currently do this by extracting my source files from VSS and comparing the last modified date on them to the date the executable file was created. This doesn’t work in TFS2008, because it doesn’t track this information. Audits are a big enough pain, now the source control tool I want to use appears to have been designed to make the job harder.
I had seen an entry somewhere that this was on the list of things to be addressed in TFS2010. Did it make the cut?
I work for a pretty big company (in the top 100 of the Fortune 500), and this problem will likely limit our use of TFS. I would love to see this addressed somehow.
It isn’t in 2010 but it’s high on our list to add. Have you considered comparing the contents or a hash of the contents instead? TFS keeps an MD5 hash of every file contents. You may be able to use this to do quick compares.
Brian
I hope there is a plan to simplify reverting a file to a previous version. This is needlessly and senselessly complex.
When viewing a file’s history, I should be able to simply right-click and select "Revert". That should check out the file for edit, get that specific version locally, and allow me to simply do a check-in to revert the file to its previous state.
I cannot fathom why such a fundamental feature wasn’t present in version 1.0, let alone still not present in TFS2008.
A changeset should be similarly easy to revert all at once, atomically.
No "power tools" or jumping through unintuitive hoops should be required.
In some ways yes and in some ways, I’m afraid not. Yes, we built rollback into the TFS 2010 product. It is faster, lacks some of the side effects of the Power Tools version and handles merge history correctly.
The unfortunate thing is that, for now, it is command line only. I agree with you that it should be an option in the history window but we did not get the opportunity to do that in this release. We will add it there as soon as we can.
Brian
I was really hoping I would see a shift in how you implement check-in policies. Currently, your implementation is terrible. Check-in policies should be enforced from the server only, and there should not be any need for distributing client DLLs (a total hassle), or registry modifications (utter lunacy). This really puts us into a bind, with developers (both in-house and contract) coming in and out, since we really don’t have time to deal with trying to track who needs the client, who has the client, and redeploys in the event of a small change in a policy. This is pretty disappointing.
It’s clear we need more work on checkin policies, but progress is being made. First, we shipped a capability in the Power Tools about a year ago to automatically distribute checkin policies to people’s machines.
Second in 2010, we added gated checkin that allows you to execute checkin policies on a build server to validate them before checkin and not rely on what happens on developer desktops.
I’m not saying we’re done but we’re making progress.
Brian
Hi Brian,
We are on Team Server 2005 currently, and I’ve installed the Beta 2 of 2010. So far I am really impressed.
My question is this – are there step by step instructions somewhere to do a Database migration out of our old data out of the old system and into the new one?
Another question: we only want to migrate 3 of our existing Team Projects (not all of them) into a single, new Collection. Can we selectively do that?
Thanks for all of the information you provide!
Joe
You should be able to do everything you mentioned.
Brian
I am currently running a testing environment that I would like to install TFS 2010 Beta 2. I have a 3 GB SQL Server (not much, but its all that I need for testing purposes). Here are the databases that I plan to create:
TFS_Configuration
TFS_Warehouse
TFS_ProjectCollection01
TFS_ProjectCollection02
TFS_ProjectCollection03
ReportServer
ReportServerTempDB
How should I divide up the 3 GB among these databases?
It depends a whole lot on what you plan to put in them. Why would you think about "carving up" the space? The databases will auto-grow as needed.
Brian
why do i have to check out before i check in!!!
i open my file, and edit, and i try to save, but editor says the file is readonly, i dont have options other than save as another file. This happens when TFS is introduced as our version control, and this is really the suckest thing ever!
why can u guys detect the file content is changed, like most of svn clients? is it hard for folks from microsoft!!
Hi,
Not sure if you can help me but using TFS 2010 RC I’ve found an eror that isn’t listed on google and I can’t find any reference to it. When running a build on the build server, it succeeds until it goes to drop the file on the network at which point I get a permissions error:
TF270003: Failed to copy. Ensure the source directory C:BuildsLPH1Sample Agile Project 1Standard BuildBinaries exists and that you have the appropriate permissions.
For the sake of testing, the user [originally the local Network acct but now a domain acct] running the build service is an admin on both the build server and the server hosting the share. In addition, I’m confident I’ve identified the correct user as revoking the permissions of this user to the specified folder on the build server turns the warning into a [very similar] error.
Needless to say the folders do exist (although not created by me).
Any suggestions you could provide would be appreciated I believe I’ve RTFM and can’t see any reference to this.
Thanks in advance
With regard to the "…appropriate permissions error", make sure you have granted permissions to the build user account you are using to both the folder and share. In our case, we give our domain builder account Full Permissions to both.
Hi Brian, I have been wondering where I can get more information on the TFS SDK for 2010. We recently undertook a new project of using VS2010 with Silverlight 4 and are actively using TFS 2010 for all collaboration and source control. My simple question is: Is it possible to programmatically access TFS from C# code and checkin, get latest, etc. when an application loads or as a result of a user action. I have seen information for VS2008 and a library named Microsoft.TeamFoundation (maybe) but have been unable to locate a download for it. Can you point me in the right direction, perhaps? Thanks
The TFS SDK is included in the VS SDK but hasn’t been updated to 2010 yet. Here’s a web page with some info that should get you started:
Brian
[quote]With regard to the "…appropriate permissions error", make sure you have granted permissions to the build user account you are using to both the folder and share. In our case, we give our domain builder account Full Permissions to both.
Monday, February 15, 2010 12:03 PM by kfkyle [/quote]
Thanks for the suggestion kfkyle
Unfortunately, I think the problem is something a little different… We’ve got the build server using our TFS.Service account (originally we tried the Builtin Network Account). I’ve now granted "Everyone" Full Control of both folders and I’m still getting the issue (and the same permissions on the drop share as well)
Any other suggestions?
I’m not quite sure what Brian means when he says "The TFS SDK is included in the VS SDK but hasn’t been updated to 2010 yet"
I don’t get it. Does that mean the TFS 2008 SDK is included with the VS 2010 SDK? What sense would that make? Or does he mean to say that the TFS 2010 SDK is there, but the *documentation* isn’t.
In 2008, the TFS SDK was part of the VS SDK. In the last few months, they’ve made the decision to remove much of the content from the VS SDK and instead distribute it via Code Gallery. You’ll find the current TFS 2010 SDK content here:
It hasn’t all been updated for TFS 2010 yet but we’re working on it and should have it ready by RTM.
Brian
I am interested in the high-availability of TFS 2010. Currently, we have TFS 2008 ATs clustered together so that if one server becomes unavailable, it will "fail over" to the warm standby. With TFS 2010, will a warm standby be necessary? It sounds like you can have two TFS 2010 ATs running in an Active-Active configuration instead of Active-Passive like we are doing with TFS 2008.
That’s right. You don’t really need to do the warm stand by thing any longer. We now recommend that people use a load balancer with multiple ATs and that way you get a very simple and automatic high availability solution. You can either use a dedicated load balancer, Windows NLB or IIS Application Request Routing (ARR).
Brian
Hi, thank you for your explanation, but for me there are a few questions left.
Is SQL 2005 still supported ?
And you recommend the installation on 1 single server, but that would cost us 1 extra license for SQL , so why is it preferable
Still we also have a sharepoint farm, and would like to connect to that farm, but is there somewhere a good installation and configuration guide, about installation of TFS 2010 ?
I have the Microsoft manual, but that is not very complete.
Kind Regards
Eric Dortmond
well this one is nice, but did not answer my questions about sql2005, and sharepoint connections.
SQL 2005 is supported for TFS 2005 & TFS 2008. TFS 2010 does not support SQL 2005 – it requires SQL 2008 or later.
TFS comes with an embedded license for running SQL on the TFS server for use with TFS. If you own TFS, you do not need any additional SQL license for a single server install.
The TFS 2010 installation guide () has a section on Sharepoint and covers connecting TFS to a remote Sharepoint installation. What do you feel is missing?
Brian
Thank you for your answer about the sql version, and the embedded version, so thats solves the license issue.
And for the rest, i shall look at the microsoft manual again.
Reading numerous articles in msdn and elsewhere, it looks as if it is possible to have source control be associated with multiple projects within a project collection. We have configured our source according to our namespaces, and that works well for various reasons, but we had to run all of our projects out of a single project in TFS 2008. With the collection concept I would like to have our source available across projects. Is this possible? Where can I find some information on how to set this up in TFS 2010?
This aspect hasn’t really changed in 2010. Project Collections are, in fact, an even firmer barrier than Team Projects. Source lives in a Team Project Collection. You can’t branch across collections or have a single workspace span collections. You can do both of those across Team Projects.
Brian
Hi,
We have build a custom activity in which first we are creating a directory(folder) and then copy all the .dll's in the same folder. But running the build gives me error TF270003: Failed to copy. Ensure the source directory \…***.dll exists and that you have the appropriate permissions.
When I see thedrop folder these files are there in the location specified..
any help on this would be great…
Nachiket,
Contact Jim at blogs.msdn.com/…/contact.aspx and he will help you diagnose this.
Brian
how to rename TPC
Run the TFS admin console
Select the Team Project Collections page
Select the Team Projec Collection to rename
Choose "Stop Collection"
Wait for it to stop
Choose "Edit Settings"
Update the name
Choose "Start Collection"
Brian
+1 Very good overview, explanation, examples, post! Thank you.
Hi Simon;
TF270003: Failed to copy. Ensure the source directory C:Builds1Personal FrameworkSecured BuildBinaries
Build error is "TF270003: Failed to copy. Ensure the source directory C:Builds1Personal FrameworkSecured BuildBinaries" When you face an error like above,it may be caused by you want to build two or more solution but workspace can find only one solution in directory of your build defination .You can revise your build defination’s directory from “workspace” tab,the directory must enclose all of the solutions.
For Example :
Build Solutions :
Solution 1 : $/ProjectA/Batch/x.sln,
Solution 2 : $/ProjectA/Web/y.sln
False Workspace : $/ProjectA/Batch or $ProjectA/Web
True Workspace : $/ProjectA
Serdar.
Hi Brian,
With TFS 2010 RTM, would it be possible to combine multiple TPCs into a single TPC? I help my customer who wants to migrate two set of their dual-tier TFS 2005 servers to TFS 2010 into a single TPC. Wondering if this is possible without any data lost?
Thank you!
-Alexandra
The simple answer is no. You can't combine two TPCs. You can host both TPCs on the same TFS instance but you can't squish them into a single TPC. There are solutions depending on how desperate you are and what you are wiling to give up. Some people use the TFS Integration Platform to copy data from one TPC into another but there are downsides – not all data is move (though source code and work items are), dates are changed, work item ids and changeset numbers are different, etc.
Brian
Thank you very much for getting back to me.
-Alex
I would like to appreciate the work of blog author that the person provided us with an extremely excellent information regarding the topic. Ireally learned something from this blog and started to contribute my ideas via commenting on this blog. Keep it up
This change has jacked up my WIT workflows and other areas of my process foundation. How is showing the collection name relevant in the Assigned To field? It is not a work around to include existing values as new work items show up with this nonsensical pre-fix. Provide me with the ability to suppress the label in this area. I want to choose Quality Assurance Director, not [Default Collection]Quality Assurance Director.
I appreciate that the group scope is not always the most useful thing. The problem is that we support groups at multiple scopes – Team Project, Team Project Collection and instance. And of course, we support both groups and user names. We put the group scope on there to be clear which one we mean. I think, ideally, we'd only put it on there if there were ambiguity.
That said, this isn't new behavior. I'm pretty sure it's been that way since TFS 2005 so I'm not sure why would would have suddenly broken something.
Brian
Is it possible to set the initial changeset number in TFS? If so how is this done?
No, I'm afraid not. The initial changeset number always starts at 1 and increments.
Brian
Hi, is it possible to get "warning error" if i am trying to check in older version of the files?
I'm not 100% sure I understand your question. In TFS, if you try to checkin a change based on an older version of the file, it will warn you that you have to merge your changes with the more recent changes and give you a number of options for doing that.
Brian
Thanks, Brian, you answered my question!
Albina
Can someone provide steps for backing up one Project collection and restoring it to a new Project collection? I am sure it is not as simple as backing up Tfs_DefaultCollection database and restoring it to Tfs_NewCollection database….is it?
@Chris – You'll need to follow the steps in 'Move a Team Project Collection'
msdn.microsoft.com/…/dd936138.aspx
You have to 'Detach' the collection from one server, backup the database, restore it to the new server, then 'Attach' it to the new TFS server. The reason for this, is that the Tfs_Configuration database contains common information (like users, permissions, etc) and the 'detach' process will make a copy of this information into the collection's database.
Hi Brian,
I have 21 TPCs on one TFS 2010 server and all TPCs shares a single SQL 2008 report server. My QA team request to create a new TPCs to store all their test automation scripts for 6 team projects that exists across multiple TPCs, but they want have the same team project name as those team projects that is already existed in other TPCs. For example, TPC_A consists of Proj_1 and Proj_2; TPC_B consists of Proj_4 and Proj_5; and TPC_C consists Proj_1, Proj_4 and Proj_5.
Do you see any issue of having the same team project name across multiple TPCs?
Please advise.
Thanks,
-Alex
There's no problem having projects with the same name as long as they are in separate team project collections.
Brian
w | https://blogs.msdn.microsoft.com/bharry/2009/04/19/team-foundation-server-2010-key-concepts/ | CC-MAIN-2019-04 | refinedweb | 4,634 | 71.04 |
Inspiration
I'm very excited about this blog, as it took me quite a lot of effort and scavenging through the internet to completely grasp the concept of this technique (that's probably because I have almost zero knowledge of Linear Algebra, or I'm just plain dumb). So I feel like I genuinely conquered a challenge, and I really want to share it with someone. But there's no way my CP friends circle will believe it; they'll think I'm just trying to show off :P
So here I am, sharing it on CF. I also created a personal blog, so that if I ever feel like sharing something again (not only about CP), I can write it there. I've added this same post there too; you can read it there if you prefer a dark theme. I'll be pleased to hear any thoughts on the blog, or suggestions on how I can improve it ^_^
Introduction
Since it concerns Linear Algebra, there is a lot of formal stuff going on in the background. But I'm not confident enough in this subject to dare go much deeper. So, whenever possible, I'll try to explain everything in intuitive, plain English. Also, this blog might take a while to read through completely, as there are quite a few observations to grasp, and the example problems aren't that easy either. So please be patient and try to go through it all, in several sittings if needed. I believe it'll be worth the time. In any case, I've written the solutions and codes, and provided links to their editorials (if available). I'll provide more details in the solutions tomorrow and put more comments in the codes, since I'm really tired from writing this blog all day.
Now, the problems that can be solved using this technique are actually not that hard to identify. The most common scenario is: you'll be given an array of numbers, and the problem asks for an answer considering the xor-sums of the numbers over all possible subsets of the array. This technique can also be used in some online-query problems: the problem can provide queries of the first type instructing you to insert numbers into the array (_without removal_; I don't know how to solve it with deletion of elements) and, in between those queries, ask for answers in separate queries of the second type.
The whole technique can be divided into two main parts; some problems can even be solved by using only the first part (don't worry if you don't understand them completely now, I explain them in detail right below):

1. Represent each given number in its binary form and consider it as a vector in the $$$\mathbb{Z}_2^d$$$ vector space, where $$$d$$$ is the maximum possible number of bits. Then, the xor of some of these numbers is equivalent to the addition of the corresponding vectors in the vector space.
2. Somehow relate the answer to the queries of the second type with the basis of the vectors found in Part 1.
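The basis mentioned in Part 2 is explained properly later, but since it's the workhorse of the whole technique, here is one common way such a basis is maintained (a sketch only; the bit-width `D` is an assumption, pick it large enough to cover your input range):

```python
D = 20  # assumed number of bit positions; choose it to cover the largest input

# basis[i] will hold a vector (number) whose highest set bit is i, or 0 if none
basis = [0] * D

def insert(x):
    """Try to add x to the basis.

    Returns False if x is already representable as the xor of some
    basis vectors (i.e. it is linearly dependent on them)."""
    for i in reversed(range(D)):
        if not (x >> i) & 1:
            continue          # bit i is not set, nothing to eliminate here
        if basis[i] == 0:
            basis[i] = x      # x becomes the basis vector for bit position i
            return True
        x ^= basis[i]         # eliminate bit i of x and keep reducing
    return False
```

Every number that `insert` rejects is exactly a xor of some subset of the previously accepted numbers; that is the link between subset xors and the basis.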
PS: Does anyone know any name for this technique? I'm feeling awkward referring to it as 'technique' this many times :P If it's not named yet, how about we name it something?
Part 1: Relating XOR with Vector Addition in $$$\mathbb{Z}_2^d$$$
Let me explain the idea in plain English first; then we'll see what $$$\mathbb{Z}_2^d$$$ and "vector space" mean. I'm sure most of you have already made this observation by yourselves at some point.
Suppose, we're xor-ing the two numbers $$$2$$$ and $$$3.$$$ Let's do it below:
Now, for each corresponding pair of bits in the two numbers, compare the result of their xor with the result of their sum taken modulo $$$2$$$:
Notice the similarity between columns $$$4$$$ and $$$6$$$? So, we can see that taking xor between two numbers is essentially the same as, for each bit positions separately, taking the sum of the two corresponding bits in the two numbers modulo $$$2.$$$
Now, consider a cartesian plane with integer coordinates, where the coordinate values can only be $$$0$$$ or $$$1.$$$ If any of the coordinates, exceeds $$$1,$$$ or goes below $$$0,$$$ we simply take it's value modulo $$$2.$$$
This way, there can only be $$$4$$$ points in this plane: $$$(0, 0), (0, 1), (1, 0), (1, 1).$$$ Writing any other pair of coordinates will refer to one of them in the end, for example, point $$$(3, 2)$$$ is the same point as point $$$(1, 0)$$$ since $$$3 \equiv 1$$$ and $$$2 \equiv 0$$$ modulo $$$2.$$$
In view of this plane, we can represent the number $$$2 = (10)_2$$$ as the point $$$(0, 1),$$$ by setting the first bit of $$$2$$$ as the $$$x$$$ coordinate and the second bit as the $$$y$$$ coordinate in our plane. Refer to this point as $$$P(0, 1).$$$ Then, the position vector of $$$2$$$ will be $$$\overrightarrow{OP}$$$ where $$$O(0, 0)$$$ is the origin. Similarly, the position vector of $$$3$$$ will be $$$\overrightarrow{OQ}$$$ where $$$Q = (1, 1).$$$
An interesting thing happens here, if we add the two position vectors, the corresponding coordinates get added modulo $$$2,$$$ which actually gives us the position vector of the xor of these two position vectors. For example, adding vectors $$$\overrightarrow{OP}$$$ and $$$\overrightarrow{OQ}$$$ we get $$$\overrightarrow{OR}$$$ where $$$R(1, 0)$$$ turns out to be the point corresponding the xor of $$$2$$$ and $$$3.$$$
This is all there is to it. Transforming xor operations to bitwise addition modulo $$$2$$$ and, in some cases, vector addition in this way can be helpful in some problems. Let's see one such problem. Before that, let me explain in short what vector space and $$$\mathbb{Z}_2^b$$$ meant earlier. I apologize to any Linear Algebra fans, since I don't want to write formal definitions here to make things look harder than it is. I'll explain the idea of these terms the way I find them in my mind, kindly pardon me for any mistakes and correct me if I'm wrong.
$$$\underline{\text{Vector Space}}$$$: Just a collection of vectors.
$$$\underline{\mathbb{Z_2}}$$$: $$$\mathbb{Z_m}$$$ is the set of remainders upon division by $$$m.$$$ So, $$$\mathbb{Z_2}$$$ is simply the set $$$\{0, 1\},$$$ since these are the only remainders possible when taken modulo $$$2.$$$
$$$\underline{\mathbb{Z_2^d}}$$$: A $$$d-$$$dimensional vector space consisting of all the different position vectors that consists of $$$d$$$ coordinates, all coordinates being elements of $$$\mathbb{Z_2}.$$$ For example, earlier our custom cartesian plane was a two-dimensional one. So, it was $$$\mathbb{Z_2^2}.$$$ $$$\mathbb{Z_2^3}$$$ would be a small $$$3d-$$$plane with only $$$2^3 = 8$$$ points, all coordinates taken modulo $$$2.$$$
So, what we've seen is that the xor-operation is equivalent to vector addition in a $$$\mathbb{Z}_2^d$$$ vector space. See how unnecessarily intimidating this simple idea sounds when written in formal math!
Anyways, the problem:
Problem 1 (Division 2 — C)
Find the number of non-empty subsets, modulo $$$10^9 + 7,$$$ of a given set of size $$$1 \le n \le 10^5$$$ with range of elements $$$1 \le a_i \le 70,$$$ such that the product of it's elements is a square number.
Link to the source
If you'd like to solve the problem first, then kindly pause and try it before reading on further.
Solution
It's obvious that our solution will build on the constraint on $$$a_i,$$$ which is just $$$70.$$$
For a number to be square, each of it's prime divisors must have an even exponent in the prime factorization of the number. There are only $$$19$$$ primes upto $$$70.$$$ So, we can assign a mask of $$$19$$$ bits to each array element, denoting if the $$$i$$$'th prime occurs odd or even number of times in it by the $$$i$$$'th bit of the mask.
So, the problem just boils down to finding out the number of non-empty subsets of this array for which the xor-sum of it's elements' masks will be $$$0.$$$
I got stuck here for quite a while ;-; We can try to use dynamic programming, $$$\text{dp[at][msk]}$$$ states the number of subsets in $$$\{a_1, a_2, \ldots, a_{\text{at}}\}$$$ such that the xor-sum of it's elements' masks is $$$\text{msk}.$$$ Then,
with the initial value $$$\text{dp[0][0] = 1}.$$$ The final answer would be, $$$\text{dp[n][0]}.$$$
But, the complexity is $$$O(n \cdot 2^{19}),$$$ which is way too high :(
The thing to notice here, is that, even if $$$n \le 10^5,$$$ the actual number of different possible $$$a_i$$$ is just $$$70.$$$ So, if we find the dp for these $$$70$$$ different masks, and if for each $$$1 \le \text{at} \le 70$$$ know the number of ways to select odd/even number of array elements with value $$$\text{at},$$$ then we can easily count the answer with the following dp:
where, $$$\text{poss[at][0]}$$$ is the number of ways to select even number of array elements with value $$$\text{at},$$$ and similarly $$$\text{poss[at][1]}$$$ for odd number of elements.
Reference Code
#include <bits/stdc++.h> using namespace std; const int N = 1e5 + 10; const int MAX_A = 70; const int TOTAL_PRIMES = 19; const int MOD = 1e9 + 7; int n; int poss[MAX_A + 1][2]; const int primes[] = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67}; int mask[MAX_A + 1]; int dp[MAX_A + 1][1 << TOTAL_PRIMES]; int main() { cin >> n; for (int i = 1; i <= MAX_A; i++) poss[i][0] = 1; for (int i = 1; i <= n; i++) { int a; scanf("%d", &a); int tmp = poss[a][0]; poss[a][0] = (poss[a][0] + poss[a][1]) % MOD; poss[a][1] = (poss[a][1] + tmp) % MOD; } for (int i = 1; i <= MAX_A; i++) { for (int p = 0; p < TOTAL_PRIMES; p++) { int cnt = 0; int at = i; while (at % primes[p] == 0) { at /= primes[p]; cnt++; } if (cnt & 1) mask[i] |= (1 << p); } } int max_mask = 1 << TOTAL_PRIMES; dp[0][0] = 1; for (int at = 1; at <= MAX_A; at++) for (int msk = 0; msk < max_mask; msk++) { dp[at][msk] = dp[at - 1][msk] * 1LL * poss[at][0] % MOD; dp[at][msk] += dp[at - 1][msk ^ mask[at]] * 1LL * poss[at][1] % MOD; dp[at][msk] %= MOD; } cout << (dp[MAX_A][0] + MOD - 1) % MOD << endl; return 0; }
Since the number of different possible masks were just $$$70$$$ in the previous problem, we had been able to use dynamic programming for checking all possible xors. But what if the constraint was much bigger, say $$$10^5.$$$ That is when we can use Part $$$2$$$ of this technique, which, in some cases, works even when the queries are online.
Part 2: Bringing in Vector Basis
We need a couple of definitions now to move forward. All the vectors mentioned in what follows, exclude null vectors. I sincerely apologize for being so informal with these definitions.
$$$\underline{\text{Independent Vectors:}}$$$ A set of vectors $$$\vec{v_1}, \vec{v_2}, \ldots, \vec{v_n}$$$ is called independent, if none of them can be written as the sum of a linear combination of the rest.
$$$\underline{\text{Basis of a Vector Space:}}$$$ A set of vectors is called a basis of a vector space, if all of the element vectors of that space can be written uniquely as the sum of a linear combination of elements of that basis.
A few important properties of independent vectors and vector basis that we will need later on(I find these pretty intuitive, so I didn't bother with reading any formal proofs. Let me know in the comments if you need any help):
For a set of independent vectors, we can change any of these vectors by adding to it any linear combination of all of them, and the vectors will still stay independent. What's more fascinating is that, the set of vectors in the space representable by some linear combination of this independent set stays exactly the same after the change.
Notice that, in case of $$$\mathbb{Z}_2^d$$$ vector space, the coefficients in the linear combination of vectors must also lie in $$$\mathbb{Z}_2.$$$ Which means that, an element vector can either stay or not stay in a linear combination, there's no in-between.
The basis is actually the smallest sized set such that all other vectors in the vector space are representable by a linear combination of just the element vectors of that set.
The basis vectors are independent.
For any set with smaller number of independent vectors than the basis, not all of the vectors in the space will be representable.
And there cannot possibly be larger number of independent vectors than basis in a set. If $$$d$$$ is the size of the basis of a vector space, then the moment you have $$$d$$$ independent vectors in a set, it becomes a basis. You cannot add another vector into it, since that new vector is actually representable using the basis.
For a $$$d-$$$dimensional vector space, it's basis can have at most $$$d$$$ vector elements.
With just these few properties, we can experience some awesome solutions to a few hard problems. But first, we need to see how we can efficiently find the basis of a vector space of $$$n$$$ vectors, where each vector is an element of $$$\mathbb{Z}_2^d.$$$ The algorithm is quite awesome <3 And it works in $$$O(n \cdot d).$$$
The Algorithm:
This algorithm extensively uses properties $$$1, 2, 3$$$ and $$$4,$$$ and also the rest in the background. All the vectors here belong to $$$\mathbb{Z}_2^d,$$$ so they are representable by a bitmask of length $$$d.$$$
Suppose at each step, we're taking an input vector $$$\vec{v_i}$$$ and we already have a basis of the previously taken vectors $$$\vec{v_1}, \vec{v_2}, \ldots, \vec{v_{i - 1}},$$$ and now we need to update the basis such that it can also represent the new vector $$$\vec{v_i}.$$$
In order to do that, we first need to check whether $$$\vec{v_i}$$$ is representable using our current basis or not.
If it is, then this basis is still enough and we don't need to do anything. But if it's not, then we just add this vector $$$vec{v_i}$$$ to the set of basis.
So the only difficuly that remains is, to efficiently check whether the new vector is representable by the basis or not. In order to facilitate this purpose, we use property $$$1$$$ to slightly modify any new vectors before inserting it in the basis, being careful not to break down the basis. This way, we can have more control over the form of our basis vectors. So here's the plan:
Let, $$$f(\vec{v})$$$ be the first position in the vector's binary representation, where the bit is set. We make sure that all the basis vectors each have a different $$$f$$$ value.
Here's how we do it. Initially, there are no vectors in the basis, so we're fine, there are no $$$f$$$ values to collide with each other. Now, suppose we're at the $$$i$$$'th step, and we're checking if vector $$$\vec{v_i}$$$ is representable by the basis or not. Since, all of our basis have a different $$$f$$$ value, take the one with the least $$$f$$$ value among them, let's call this basis vector $$$\vec{b_1}.$$$
If $$$f(\vec{v_i}) < f(\vec{b_1})$$$ then no matter how we take the linear combination, by property $$$2,$$$ no linear combination of the basis vectors' can have $$$1$$$ at position $$$f(\vec{v_i}).$$$ So, $$$\vec{v_i}$$$ will be a new basis vector, and since it's $$$f$$$ value is already different from the rest of the basis vectors, we can insert it into the set as it is and keep a record of it's $$$f$$$ value.
But, if $$$f(\vec{v_i}) == f(\vec{b_1}),$$$ then we must subtract $$$\vec{b_1}$$$ from $$$\vec{v_i}$$$ if we want to represent $$$\vec{v_i}$$$ as a linear combination of the basis vectors, since no other basis vector has bit $$$1$$$ at position $$$f(\vec{v_i}) = f(\vec{b_1}).$$$ So, we subtract $$$\vec{b_1}$$$ from $$$\vec{v_i}$$$ and move on to $$$\vec{b_2}.$$$
Note that, by changing the value of $$$\vec{v_i}$$$ we're not causing any problem according to property $$$1.$$$ $$$\vec{v_i}$$$ and $$$\vec{v_i} - \vec{b_1}$$$ is of same use to us. If in some later step we find out $$$\vec{v_i}$$$ is actually not representable by the current basis, we can still just insert it's changed value in the basis, since the set of vectors in the space representable by this new basis would've been the same if we inserted the original $$$\vec{v_i}$$$ instead.
If, after iterating through all the basis vector $$$\vec{b}$$$'s and subtracting them from $$$\vec{v_i}$$$ if needed, we still find out that $$$\vec{v_i}$$$ is not null vector, it means that the new changed $$$\vec{v_i}$$$ has a larger value of $$$f$$$ than all other basis vectors. So we have to insert it into the basis and keep a record of it's $$$f$$$ value.
Here's the implementation, the vectors being represented by bitmasks of length $$$d$$$:
int basis[d]; // basis[i] keeps the mask of the vector whose f value is i int sz; // Current size of the basis void insertVector(int mask) { for (int i = 0; i < d; i++) { if ((mask & 1 << i) == 0) continue; // continue if i != f(mask) if (!basis[i]) { // If there is no basis vector with the i'th bit set, then insert this vector into the basis basis[i] = mask; ++sz; return; } mask ^= basis[i]; // Otherwise subtract the basis vector from this vector } }
Let's view some problems now:
Problem 2a
Given a set $$$S$$$ of size $$$1 \le n \le 10^5$$$ with elements $$$0 \le a_i \lt 2^{20}.$$$ Find the number of distinct integers that can be represented using xor over the set of the given elements.
Link to the source
Solution
Think of each element as a vector of dimension $$$d = 20.$$$ Then the vector space is $$$\mathbb{Z}_2^{20}.$$$ We can find it's basis in $$$O(d \cdot n).$$$ For any linear combination of the basis vectors, we get a different possible xor of some subset. So, the answer would be $$$2^\text{size of basis}.$$$ It would fit in an integer type, since size of basis $$$\le d = 20$$$ by property $$$7.$$$
Reference Code
#include <bits/stdc++.h> using namespace std; const int N = 1e5 + 10, LOG_A = 20; int basis[LOG_A]; int sz; void insertVector(int mask) { for (int i = 0; i < LOG_A; i++) { if ((mask & 1 << i) == 0) continue; if (!basis[i]) { basis[i] = mask; ++sz; return; } mask ^= basis[i]; } } int main() { int n; cin >> n; while (n--) { int a; scanf("%d", &a); insertVector(a); } cout << (1 << sz) << endl; return 0; }
Problem 2b
We have a graph of $$$2^{k}$$$ nodes numbered from $$$0$$$ to $$$2^{k} - 1,$$$ $$$1 \le k \le 30.$$$ Also, we're given $$$1 \le M \le 10^5$$$ integers $$$x_1, x_2, \ldots, x_M$$$ within the range $$$0 \le x_i \le 2^{k} - 1.$$$ In the graph, two vertices $$$u$$$ and $$$v$$$ are connected with an edge iff $$$u \oplus v = x_i$$$ for some $$$i.$$$ Find the number of connected components in the graph.
Link to the source
Link to the editorial and reference code
Problem 3
Given a set $$$S$$$ of size $$$1 \le n \le 10^5$$$ with elements $$$0 \le a_i \lt 2^{20}.$$$ What is the maximum possible xor of the elements of some subset of $$$S?$$$
Link to the source
Solution
In this problem, we need to slightly alter the definition of $$$f(\vec{b}).$$$ Instead of $$$f$$$ being the first position with a set bit, let it be the last position with a set bit.
Now, to get the maximum, we initialize our
answer at 0 and we start iterating the basis vectors starting with the one that has the highest value of $$$f.$$$
Suppose, we're at basis vector $$$\vec{b}$$$ and we find that
answer doesn't have the $$$f(\vec{b})$$$'th bit set, then we add $$$\vec{b}$$$ with
answer. This greedy solution works because $$$f(\vec{b})$$$ is the most significant bit at the moment, and we must set it; doesn't matter if all the following bits turn to $$$0.$$$
Reference Code
#include <bits/stdc++.h> using namespace std; const int N = 1e5 + 10, LOG_A = 20; int basis[LOG_A]; void insertVector(int mask) { for (int i = LOG_A - 1; i >= 0; i--) { if ((mask & 1 << i) == 0) continue; if (!basis[i]) { basis[i] = mask; return; } mask ^= basis[i]; } } int main() { int n; cin >> n; while (n--) { int a; scanf("%d", &a); insertVector(a); } int ans = 0; for (int i = LOG_A - 1; i >= 0; i--) { if (!basis[i]) continue; if (ans & 1 << i) continue; ans ^= basis[i]; } cout << ans << endl; return 0; }
Problem 4 (1st Hunger Games — S)
We have an empty set $$$S$$$ and we are to do $$$1 \le n \le 10^6$$$ queries on it. Let, $$$X$$$ denote the set of all possible xor-sums of elements from a subset of $$$S.$$$ There are two types of queries.
Type $$$1$$$: Insert an element $$$1 \le k \le 10^9$$$ to the set(If it's already in the set, do nothing)
Type $$$2$$$: Given $$$k,$$$ print the $$$k$$$'th hightest number from $$$X.$$$ It's guaranteed that $$$k \le \mid X \mid.$$$ Link to the source
Solution
A bit like the previous one. For query type $$$2,$$$ again we'll iterate through the basis vectors according to their decreading order of $$$f$$$ values.
Suppose $$$\vec{b_h}$$$ is the one with the hightest $$$f$$$ value. Initially we know there are $$$2^\text{basis size}$$$ elements in $$$X.$$$ So, if $$$k <= \frac{2^\text{basis size}}{2},$$$ we set the $$$f(\vec{b_h})$$$'th bit of
answer to $$$0.$$$ Otherwise we set it to $$$1$$$ and subtract $$$\frac{2^\text{basis size}}{2}$$$ from $$$k.$$$ Then we move on to the next basis vector and continue. In the end $$$k$$$ will be $$$1$$$ and we'll get our answer by setting $$$0$$$ in
answer for all $$$f(\vec{b_i})$$$'th bits from that point forward.
Reference Code
#include <bits/stdc++.h> using namespace std; const int N = 1e6 + 10, LOG_K = 30; int basis[LOG_K], sz; void insertVector(int mask) { for (int i = LOG_K - 1; i >= 0; i--) { if ((mask & 1 << i) == 0) continue; if (!basis[i]) { basis[i] = mask; sz++; return; } mask ^= basis[i]; } } int query(int k) { int mask = 0; int tot = 1 << sz; for (int i = LOG_K - 1; i >= 0; i--) if (basis[i]) { int low = tot / 2; if ((low < k && (mask & 1 << i) == 0) || (low >= k && (mask & 1 << i) > 0)) mask ^= basis[i]; if (low < k) k -= low; tot /= 2; } return mask; } int main() { int n; cin >> n; while (n--) { int t, k; scanf("%d %d", &t, &k); if (t == 1) insertVector(k); else printf("%d\n", query(k)); } return 0; }
Problem 5 (Division 2 — F)
You're given an array $$$0 \le a_i \lt 2^{20}$$$ of length $$$1 \le n \le 10^5.$$$ You have to answer $$$1 \le q \le 10^5$$$ queries.
In each query you'll be given two integers $$$1 \le l \le n$$$ and $$$0 \le x \lt 2^{20}.$$$ Find the number of subsequences of the first $$$l$$$ elements of this array, modulo $$$10^9 + 7,$$$ such that their bitwise-xor sum is $$$x.$$$
Link to the source
Solution
We can answer the queries online. Iterate through the prefix of the array, and for each prefix remember the basis vectors of that prefix.
Then, iterate through the queries. To answer a query, we check if $$$x$$$ is actually representable by the prefix of $$$l$$$ elements or not, with slight modification to the
insertVector function(We don't need to add $$$x,$$$ just check if it's representable or not).
If it's not representable, then the answer to the query is $$$0.$$$ If it is representable, then the answer will be $$$2^(l - b),$$$ where $$$b$$$ is the basis size for the first $$$l$$$ elements. It is so, because for each subset of the $$$(l - b)$$$ non-basis vectors in the prefix, we find a unique linear combination to yield xor-sum $$$x.$$$
Reference Code
#include <bits/stdc++.h> using namespace std; typedef pair<int, int> ii; #define x first #define y second const int N = 1e5 + 10; const int LOG_A = 20; const int MOD = 1e9 + 7; int n; int a[N]; int q; ii q_data[N]; vector<int> q_at[N]; int powers[N]; int ans[N]; int base[LOG_A], sz; bool checkXor(int mask) { for (int i = 0; i < LOG_A; i++) { if ((mask & 1 << i) == 0) continue; if (!base[i]) return false; mask ^= base[i]; } return true; } void insertVector(int mask) { for (int i = 0; i < LOG_A; i++) { if ((mask & 1 << i) == 0) continue; if (!base[i]) { base[i] = mask; sz++; return; } mask ^= base[i]; } } int main() { cin >> n >> q; for (int i = 1; i <= n; i++) scanf("%d", &a[i]); for (int i = 1; i <= q; i++) { scanf("%d %d", &q_data[i].x, &q_data[i].y); q_at[q_data[i].x].push_back(i); } powers[0] = 1; for (int i = 1; i < N; i++) powers[i] = powers[i - 1] * 2LL % MOD; for (int at = 1; at <= n; at++) { insertVector(a[at]); for (int at_q : q_at[at]) if (checkXor(q_data[at_q].y)) { ans[at_q] = powers[at - sz]; } } for (int i = 1; i <= q; i++) printf("%d\n", ans[i]); return 0; }
Problem 6 (Education Round — G)
You are given an array $$$0 \le a_i \le 10^9$$$ of $$$1 \le n \le 2 \cdot 10^5$$$ integers. You have to find the maximum number of segments this array can be partitioned into, such that -
1. Each element is contained in exactly one segment
2. Each segment contains at least one element
3. There doesn't exist a non-empty subset of segments such that bitwise-xor of the numbers from them is equal to $$$0$$$
Print $$$-1$$$ if no suitable partition exists.
Link to the source
Solution
Notice that, saying all subsets of a set yeild non-zero xor is equivalent to saying all subsets of that set yeild different xor-sum. The the xor-sums of segments in the answer partition need to be independent vectors. This is the first of the two main observations.
The second one is that, suppose we picked some segments $$$[l_1 = 1, r_1], [l_2 = r_1 + 1, r_2], \ldots, [l_k = r_{k - 1} + 1, r_k].$$$ Let, $$$p_i$$$ be the xor of the xor-sums of the first $$$i$$$ segments. Then, observe that, every possible xor of the numbers from some non-empty subset of these segments can also be obtained by xor-ing some subset from the set $$$\{p_1, p_2, ldots, p_k\}$$$ and vice versa. Which means that the set of xor-sums of these segments and the set of prefix xors of these segments produces the exact same set of vectors in $$$\mathbb{Z}_2^{31}.$$$ So, if the xor-sums of these segments has to be independent, then so does the prefix xors of these segments. Thus, the answer simply equals the basis size of the n prefix xors of the array. The only exception when the answer equals $$$-1$$$ happens, when the xor-sum of all the elements in the array is $$$0.$$$
I'll write this solution in more detail tomorrow. I'm half asleep right now.
Reference Code
#include <bits/stdc++.h> using namespace std; const int N = 2e5 + 10, LOG_PREF = 31; int n; int basis[LOG_PREF]; void insertVector(int mask) { for (int i = 0; i < LOG_PREF; i++) { if ((mask & 1 << i) == 0) continue; if (!basis[i]) { basis[i] = mask; return; } mask ^= basis[i]; } } int main() { cin >> n; int pref = 0; for (int i = 1; i <= n; i++) { int a; scanf("%d", &a); pref ^= a; insertVector(pref); } if (pref == 0) { cout << -1 << endl; return 0; } int ans = 0; for (int i = 0; i < LOG_PREF; i++) { ans += (basis[i] > 0); } cout << ans << endl; return 0; }
Conclusion
This is my first take on writing tutorial blogs on CF. I hope it'll be of use to the community.
I apologize for my terrible Linear Algebra knowledge. I would write this blog without using any of it if I could. I don't want to spread any misinformation. So please let me know in comments if you find any mistakes/wrong usage of notations.
I plan to write on Hungarian Algorithm next. There's just so many prerequisites to this algorithm. It'll be an enjoyable challenge to write about. I'd be glad if you can provide me some resource links in the comments to learn it from, though I already have quite a few. | https://codeforces.com/blog/entry/68953 | CC-MAIN-2020-45 | refinedweb | 4,876 | 65.56 |
Qt
zh-CN:Qt.
Tools
The following are official Qt tools:
- Qt Creator — A cross-platform IDE tailored for Qt that supports all of its features.
- Qt Linguist — A set of tools that speed the translation and internationalization of Qt applications.
- Qt Assistant — A configurable and redistributable documentation reader for Qt qch files.
- Qt Designer — A powerful cross-platform GUI layout and forms builder for Qt widgets.
- Qt Quick Designer — A visual editor for QML files which supports WYSIWYG. It allows you to rapidly design and build Qt Quick applications and components from scratch.
- QML Viewer — A tool for loading QML documents that makes it easy to quickly develop and debug QML applications.
- qmake — A tool that helps simplify the build process for development project across different platforms, similar to cmake, but with fewer options and tailored for Qt applications.
- uic — A tool that reads *.ui XML files and generates the corresponding C++ files.
- rcc — A tool that is used to embed resources (such as pictures) into a Qt application during the build process. It works by generating a C++ source file containing data specified in a Qt resource (.qrc) file.
- moc — A tool that handles Qt's C++ extensions (the signals and slots mechanism, the run-time type information, and the dynamic property system, etc.).
Bindings
Qt has bindings for all of the more popular languages, for a full list see this list.
The following examples display a small 'Hello world!' message in a window.
C++
- Package::
- python-pyqt4 - Python 3.x bindings
- python2-pyqt4 - Python 2.x bindings
-_())
- Package:
- python-pysideAUR - Python 3.x bindings
- python2-pysideAUR -#
- Package: kdebindings-qyoto
- Website:
- Build with:
mcs -pkg:qyoto hello.cs
- Run with:
mono hello.exe
hello.cs
using System; using Qyoto; public class Hello { public static int Main(String[] args) { new QApplication(args); new QLabel("Hello world!").Show(); return QApplication.Exec(); } }
Ruby
- Package: kdebindings-qtruby
-() | https://wiki.archlinux.org/index.php?title=Qt&oldid=277311 | CC-MAIN-2017-13 | refinedweb | 316 | 59.09 |
PerfectTemplate the September 5th Swift toolchain snapshot. **
Current version: DEVELOPMENT-SNAPSHOT-2016-09-05-a
We focus exclusively on the latest and most stable version of Swift to maximize developers’ productivity. Until the release of Swift 3.0 (expected in September 2016), please treat this version of Perfect for R&D purposes only.
Issues
Building & Running
The following will clone and build an empty starter project and launch the server on port 8181.
git clone cd PerfectTemplate swift build .build/debug/PerfectTemplate
You should see the following output:
Starting HTTP server on 0.0.0.0:8181 with document root ./webroot
This means the server is running and waiting for connections. Access to see the greeting. Hit control-c to terminate the server.
Starter Content
The template file contains a very simple "hello, world!" example.
import PerfectLib import PerfectHTTP import PerfectHTTPServer // Create HTTP server. let server = HTTPServer() // Register your own routes and handlers var routes = Routes() routes.add(method: .get, uri: "/", handler: { request, response in response.appendBody(string: "<html><title>Hello, world!</title><body>Hello, world!</body></html>") response.completed() } ) // Add the routes to the server. server.addRoutes(routes) // Set a listen port of 8181 server.serverPort = 8181 // Set a document root. // This is optional. If you do not want to serve static content then do not set this. // Setting the document root will automatically add a static file handler for the route /** server.documentRoot = "./webroot" // Gather command line options and further configure the server. // Run the server with --help to see the list of supported arguments. // Command line arguments will supplant any of the values set above. configureServer(server) do { // Launch the HTTP server. try server.start() } catch PerfectError.networkError(let err, let msg) { print("Network error thrown: \(err) \(msg)") }
Further Information
For more information on the Perfect project, please visit perfect.org.
Github
Help us keep the lights on
Dependencies
Used By
Total: 0 | http://swiftpack.co/package/batschz/facepushserver | CC-MAIN-2019-43 | refinedweb | 316 | 53.17 |
Created on 2010-03-25 15:39 by cane, last changed 2017-12-30 20:56 by ned.deily.
When trying to run the Python 2.6.5 & 3.1 IDLE GUI on Windows 7, I receive the following error: "IDLE's subprocess didn't make connection. Either IDLE can't start a subprocess or personal firewall software is blocking the connection."
I've researched this error and tried following the steps to troubleshoot it without any success. I do not have any firewall software installed, nor is the Microsoft firewall enabled. When following issue 8099 and trying to set the TCL and TK library for idle.py, I get an error telling me to check the path and permissions. (see attached screenshot)
These workstations are set up in an Active Directory environment where each username that logs into the workstation is a local administrator, and the referenced "M:\" drive is a home directory that is mapped for them.
When the local administrator account logs into the workstation the user can execute the IDLE GUI without any issues.
I found reference in one article that the os.py creates a directory called ".idlerc".
I'm wondering if there is a way to hardcode a reference path that doesn't point to my "M:\" drive for this directory, or is there something else that I can try to fix this issue?
Thanks,
Bryan
I just reproduced this by removing write access from my user for my home directory.
It seems odd that you wouldn't have write access to your home directory in the first place, but that's tangent to the issue. I'll see if there are other acceptable locations to attempt to place the config directory.
Let's close the issue next week, as the report doesn't make sense if the issuer has no write access to their own home dir.
Please don't close it. Users in this situation can't use IDLE. We should at least try alternative locations to create this directory or perhaps prompt them for a directory they'd like to use.
Good point. Thank you.
Issue17864 is a duplicate of this. Also, note the error reported here is a result of IDLE not having write access in the user's home directory and thus cannot create the .idlerc directory. IDLE seems to handle more gracefully the case of .idlerc existing but not writable.
I think the proper solution is to warn "Cannot write .idlerc to your home directory. Settings will not be saved." and continue.
.idlerc is a directory that contains the user versions of config-xyz.def. There are currently 4, another will probably be added. The 'offending' code is in configHandler.IdleConf.GetUserCfgDir() (line 195).
    if not os.path.exists(userDir):
        try:
            os.mkdir(userDir)
        except OSError:
            warn = ('\n Warning: unable to create user config directory\n' +
                    userDir + '\n Check path and permissions.\n Exiting!\n\n')
            sys.stderr.write(warn)
            raise SystemExit
The last line could be replaced by 'return None'. The calling code that uses the normal return would have to be changed to skip trying to read and write the config files. According to Ned, Idle already manages if .idlerc exists but is unwritable.
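A standalone sketch of that change (the function name and `home` parameter are mine for illustration; the real code is a method on idlelib's IdleConf): warn instead of exiting, and return None so callers know to skip reading and writing config files.

```python
import os
import warnings

def get_user_cfg_dir(home=None):
    """Return the per-user .idlerc directory, or None if it cannot be used."""
    user_dir = home if home is not None else os.path.expanduser('~')
    cfg_dir = os.path.join(user_dir, '.idlerc')
    if not os.path.exists(cfg_dir):
        try:
            os.mkdir(cfg_dir)
        except OSError:
            # Was: write a warning to stderr and raise SystemExit.
            # Now: warn and keep going; callers treat None as
            # "run with defaults, don't save settings".
            warnings.warn('unable to create user config directory %r; '
                          'settings will not be saved' % cfg_dir)
            return None
    return cfg_dir
```

Callers such as the recent-files and breakpoint code would then guard on the None return before building paths under it.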
The recent files list and breakpoint lists are also stored in .idlerc. Here are the 4 hits for 'GetUserCfgDir' in C:\Programs\Python34\Lib\idlelib\*.py ...
EditorWindow.py: 145: self.recent_files_path = os.path.join(idleConf.GetUserCfgDir(),
PyShell.py: 131: self.breakpointPath = os.path.join(idleConf.GetUserCfgDir(),
configHandler.py: 184: userDir=self.GetUserCfgDir()
configHandler.py: 195: def GetUserCfgDir(self):
While Idle could continue without any of these, I like Brian's idea of asking for an alternative first. Actually, if there is no 'home' directory (if expanduser('~') below fails), Idle already tries the current directory as a backup. It could do the same if a home directory exists but is unusable.
How far should we go with this? A command-line option to set the user cfg dir? That should be doable.
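The fallback order described above (home directory first, then the current directory) can be sketched as a small helper. Nothing here is idlelib API; it only shows the selection logic:

```python
import os

def choose_cfg_parent(candidates=None):
    """Return the first existing, writable directory from candidates."""
    if candidates is None:
        # Preference order: home directory, then current directory.
        candidates = [os.path.expanduser('~'), os.getcwd()]
    for cand in candidates:
        if os.path.isdir(cand) and os.access(cand, os.W_OK):
            return cand
    return None  # caller falls back to "no saved settings" mode
```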
I thought about a new idlelib/config-users.def mapping users to directories, but that has its own problems of write permission, as will as cross-platform access to user name.
The OP (Bryan) asked "[is] there is a way to hardcode a reference path that doesn't point to my "M:\" drive for this directory[?]" Yes. GetUserCfgDir starts with

    cfgDir = '.idlerc'
    userDir = os.path.expanduser('~')

Replace the second line, after each install, to point to a writable directory.
The use case reported here sounds like a classroom or lab environment with many people (and likely novices) using open environment machines. In such cases, if users don't have write access to their home directories, it seems to me that there's no need to try to preserve IDLE configurations across sessions. Asking the user for another location in such cases would be confusing and add needless complexity. I'd say either just create a temporary directory or create the config files as temporary files as needed. For advanced users, I suppose a command line option could be added but has there been any demand for such a feature?
I don't know of any request for a new option, so I would go for something simple that lets Idle run. Temporary files are a good idea for breakpoints (temporary anyway, I think) and maybe for recent files.
See Issue14576 which is the same underlying issue.
I would also like to add that the location of this directory is not correct for Windows software. This directory should be created in %APPDATA% where users by default do have write permissions.
If there are plans to ever make this application portable then it should support a command line option to specify the location of the configuration as well.
Idle is not unique. Several other apps ported from unix also put .xyx files in the home directory on Windows. What is unusual, if not unique, about IDLE is the need to run multiple versions. If .idlerc is moved, already released versions will not be able to access it. I am not ready to make a break yet. In any case, moving it to a subdirectory of $HOME will not solve this issue, which is not being able to write to $HOME, and it therefore a different issue.
Hi Terry,
I did not just make that stuff up on my last post, that is actually the standard for Windows applications. Yes, many Linux ports get it wrong but is that any reason to ever perpetuate a bad practice?
To see the standards you can download the Windows SDK but to make things easier for you here is a link that talks about this:
By using non-standard conventions the application is just asking for trouble like what is being manifested with this issue. Users will be able to write to their %APPDATA% area, no administrator would lock that down as it would cause too many applications to fail (Kiosk type installations not included).
Can I suggest that this issue continues to be about IDLE not being able to write its preferences directory/files due to permissions, and we create a new issue for the fact that IDLE is storing it in the wrong place under Windows?
Yes, that is what this issue *is* about. IDLE, like Python itself, expects to be run on machines that users can write to normally, that have not been crippled by bureaucratic policies, or by users. The editor is useless if the user cannot write files somewhere. None of this is specific to Windows.
Ned's idea of a temporary directory seems easy. Change the warning and replace'raise SystemExit' with 'td = tempfile.TemporaryDirectory(); userDir = td.name' (see msg222694). For 2.7, however, TemporayDirectory is not available. For the underlying mkdtemp, the user "is responsible for deleting the temporary directory and its contents when done with it." (I presume failure of apps to do this is why temp file cleanup is needed.) My inclination for 2.7 would be to copy a stripped down copy of TemporaryDirectory with just the finalizer code. In fact, since we do not need or want the implicit cleanup warning (do we?), we could do that for all versions.
We do not have to accommodate all possibilities. One report, possibly on Stackoverflow, was from a user whose home dir and, I presume, appdata dir, were on a remote server. Maybe he also had an offline home dir, I don't remember. In any case, IDLE was not happy with the setup.
As I said before, permanently and unconditionally moving user config files on Windows will break compatibility with all previous releases, so I would not do that unless we decide to break compatibility anyway, for reasons other than MS's recommendations. However, after a general solution is applied we could consider in a separate issue using Appdata as future-looking, Windows-specific alternative to a temporary directory. However, this would require careful though lest we end up with two userdirs because the non-writability of homedir is only temporary.
Attached is a patch that I think will work. I have not tested it because I do not know how to make my home directory, and only my home directory, read-only.
Brian, how did you do it? I now have Win 10.
When I rt click user/terry and select properties, there is a tri-state box "[ ] Read-only (Only applies to files in folder)". It initially has a solid square, and changes to blank and checkmark. Try to apply, there is an unselectable grayed-out choice to apply to the directory only and a mandatory choice to also apply recursively to all files and subdirectories. I am loath to do this since there are 47000 files (40000 in appdate, which seems grossly excessive, but that is the report)
I would like this tested anyway at least once on linux and mac. Testing procedure: change name of .idlerc, lock home dir, run installed IDLE from command line. Should exit with message. Run patched repository IDLE. Should run, reporting temp dir and deleting it on exit. (Unlock home dir and rename .idlerc back).
Checked on Linux and Mac - doesn't work correctly. mkdtemp() returns a different name every time it's called, and GetUserCfgDir() is called in three places, meaning we end up with three different tmp directories (which on quick examination didn't all get cleaned up at end).
I'd suggest changing it so that GetUserCfgDir() caches its result and returns the cached version on subsequent calls. Running out the door so don't have time to try this myself right now...
Just a note that the 'store things in APPDATA' is issue #24765
OK: call GetUserCfgDir once on creation of idleConf instance and set .userdir attribute. Replace repeated calls in PyShell and Editor with attributes accesses. I tested that, with patch, IDLE starts and rewrites both breakpoints.lst and recent-files.lst as appropriate.
Better, but alas still not quite. On further investigation, the issue is that a new instance of idleConf is instantiated in the subprocess, which then calls mkdtemp() returning a different name. You can see this by doing 'restart shell' and noting that it will hit the warning you added in GetUserCfgDir.
There are multiple places that the subprocess does access preferences, so just eliminating them is probably not the right way to go.
I'd probably recommend that the user prefs dir be communicated to the subprocess somehow. Two suggestions are via adding a command line parameter where we launch the subprocess (build_subprocess_arglist), or have the subprocess get it via 'remotecall' when it starts up (perhaps in MyHandler.handle). Either way this would then need to be communicated to idleConf so it uses that directory.
Would there be a preference and/or any other alternatives?
I cannot currently think of any reason why the subprocess *needs* to access idleConf, so I want to put this on hold while investigating whether the accesses can be eliminated as part of #25507.
Without being able to make my homedir read-only, I don't, of course, see the messages. Since run does not use idleConf (I am now sure), you should have been able to proceed after clicking away the admittedly obnoxious repeat messages. The old extraneous user-process temporary should disappear when you restart. Can you check that?
Problems with patching 2.7 are no longer relevant.
To test, we should refactor config so that the attempt to find and access $HOME and .idlerc are isolated in a function that can be mocked to simulate various problems.
def get_rc():
"""Return a directory path that is readable and writable.
If not possible, return an error indicator. <to be determined>
"""
Testing such a function is a different issue, but I would just reuse existing code.
> GetUserCfgDir() is called in three places, (buried in one of Mark's posts). On the fact of it, this seems like something that should be fixed.
My second patch did fix that. I think I will extract that part immediately.
New changeset 223c7e70e48eb6eed4aab3906fbe32b098faafe3 by terryjreedy in branch 'master':
bpo-8231: Call idlelib.IdleConf.GetUserCfgDir only once. (#2629)
New changeset 552f26680d3806df7c27dd7161fd7d57ac815f78 by terryjreedy in branch '3.6':
[3.6] bpo-8231: Call idlelib.IdleConf.GetUserCfgDir only once. (GH-2629) (#2631)
#27534 is about reducing the imports into the user runcode process. Without rechecking the reason for each import, I think it possible that this might result in config not being indirectly imported.
How about only taking warning when not able to create dir at GetUserCfgDir(), then take the permission handler in other place?
e.g. when user trying to save the config in bad dir, pop-out a dialog to tell it is permission problem or dir not eixsts...etc.
I closed #30918 as a duplicate of this. It has full 'set' and expanduser info.
Another duplicate: #32411, MacOS 10.3.1, user apparently cannot write to home dir. Starting IDLE in terminal results in
Warning: unable to create user config directory
/Users/Steve Margetts/.idlerc
Check path and permissions.
Exiting!
Ned diagnosed #32447 as likely due to the space in the user name. What matters for this issue is that a) someone can do that and b) IDLE likely could continue running anyway. There might then be problems with saving files from the editor, so any warning message should include that possibility.
> Ned diagnosed #32447 as likely due to the space in the user name.
Actually, that's not what the primary problem was. It was a severely misconfigured home directory, both permissions and groups. I'm not sure how that situation was created (possibly through inadvertent sysadmin commands from the shell) but it's not something that IDLE needs to worry about; such a configuration breaks lots of other system programs. In other words, the OP's system was broken. | https://bugs.python.org/issue8231 | CC-MAIN-2020-50 | refinedweb | 2,479 | 65.93 |
>1. Can we have multiple files in DFS use different block sizes ?
No, current this might not be possible, we have fixed sized blocks.
>2. If we use default block size for these small chunks, is the DFS space
>wasted ?
DFS space is not wasted, all the blocks are stored on individual datanode's filesystem as
is. But you would be wasting NameNode's namespace. NameNode holds the entire namespace in
memory, so, instead of using 1 file with 128M block if you do multiple files of size 6M you
would be having so many entries.
> If not then does it mean that a single DFS block can hold data from
>more than one file ?
DFS Block cannot hold data from more than one file. If your file size say 5M which is less
than your default block size say 128M, then the block stored in DFS would be 5M alone.
To over come this, ppl usually run a map/reduce job with 1 reducer and Identity mapper, which
basically merges all small files into one file. In hadoop 0.18 we have archives and once HADOOP-1700
is done, one could open the file to append to it.
Thanks,
Lohit
----- Original Message ----
From: "Goel, Ankur" <Ankur.Goel@corp.aol.com>
To: core-user@hadoop.apache.org
Sent: Friday, June 27, 2008 2:27:57 AM
Subject: HDFS blocks
Hi | http://mail-archives.apache.org/mod_mbox/hadoop-common-user/200806.mbox/%3C11616.48091.qm@web53611.mail.re2.yahoo.com%3E | CC-MAIN-2017-39 | refinedweb | 230 | 82.34 |
Now that we are done with the preliminaries, I'd like to show you how to design and develop a small application -- a symbolic calculator. It's a console application: the user types in expressions that are evaluated, and the results are displayed. To make it more interesting, the calculator supports symbolic variables that can be assigned and re-assigned and used in expressions. Here's an example of a user session:
In fact you can run the calculator right here, on the spot:
import Text.Parsec import Text.Parsec.String import Text.Parsec.Token import Text.Parsec.Expr import Text.Parsec.Language import qualified Data.Map as M import qualified Control.Monad.State as S import Control.Monad.Error import Control.Monad.Identity -- Lexer def = emptyDef { identStart = letter , identLetter = alphaNum , opStart = oneOf "+-*/=" , opLetter = oneOf "+-*/=" } lexer :: TokenParser () lexer = makeTokenParser def -- Expression tree data Expression = Constant Double | Identifier String | Addition Expression Expression | Subtraction Expression Expression | Multiplication Expression Expression | Division Expression Expression | Negation Expression | Assignment Expression Expression deriving Show -- Parser parseNumber :: Parser Expression parseNumber = do v <- naturalOrFloat lexer case v of Left i -> return $ Constant $ fromIntegral i Right n -> return $ Constant n parseIdentifier :: Parser Expression parseIdentifier = do i <- identifier lexer return $ Identifier i parseExpression :: Parser Expression parseExpression = (flip buildExpressionParser) parseTerm [ [ Prefix (reservedOp lexer "-" >> return Negation) , Prefix (reservedOp lexer "+" >> return id) ] , [ Infix (reservedOp lexer "*" >> return Multiplication) AssocLeft , Infix (reservedOp lexer "/" >> return Division) AssocLeft ] , [ Infix (reservedOp lexer "+" >> return Addition) AssocLeft , Infix (reservedOp lexer "-" >> return Subtraction) AssocLeft ] , [ Infix (reservedOp lexer "=" >> return Assignment) AssocRight ] ] parseTerm :: Parser Expression parseTerm = parens lexer parseExpression <|> parseNumber <|> parseIdentifier parseInput :: Parser Expression parseInput = do whiteSpace lexer ex <- parseExpression eof return ex -- Evaluator type SymTab = M.Map String Double type Evaluator a = S.StateT SymTab (ErrorT String Identity) a runEvaluator :: Evaluator Double -> SymTab -> Either String (Double, SymTab) runEvaluator calc symTab = runIdentity $ runErrorT $ S.runStateT calc symTab eval :: Expression 
-> Evaluator Double eval (Constant x) = return x eval (Identifier i) = do symtab <- S.get case M.lookup i symtab of Nothing -> fail $ "Undefined variable " ++ i Just e -> return e eval (Addition eLeft eRight) = do lft <- eval eLeft rgt <- eval eRight return $ lft + rgt eval (Subtraction eLeft eRight) = do lft <- eval eLeft rgt <- eval eRight return $ lft - rgt eval (Multiplication eLeft eRight) = do lft <- eval eLeft rgt <- eval eRight return $ lft * rgt eval (Division eLeft eRight) = do lft <- eval eLeft rgt <- eval eRight return $ lft / rgt eval (Negation e) = do val <- eval e return $ -val eval (Assignment (Identifier i) e) = do val <- eval e S.modify (M.insert i val) return val eval (Assignment _ _) = fail "Left of assignment must be an identifier" defaultVars :: M.Map String Double defaultVars = M.fromList [ ("e", exp 1) , ("pi", pi) ] --runEvaluator returns Either String (Double, SymTab Double) calculate :: SymTab -> String -> (String, SymTab) calculate symTab s = case parse parseInput "" s of Left err -> ("error: " ++ (show err), symTab) Right exp -> case runEvaluator (eval exp) symTab of Left err -> ("error: " ++ err, symTab) Right (val, newSymTab) -> (show val, newSymTab) loop :: SymTab -> IO () loop symTab = do line <- getLine if null line then return () else do let (result, symTab') = calculate symTab line putStrLn result loop symTab' main = loop defaultVars -- show -- Enter expressions, one per line. Empty line to quit --
This is not the implementation I'll be describing. I wrote this one using Haskell libraries such as Parsec (one of the standard parsing libraries) and several monad transformers. In this series of tutorials I'd like to implement the same functionality from scratch, so you'll be able to clearly see each step and learn the language in the process.
The Design
- At the very high level, the calculator is a loop that gets a line of text from the user and then calculates and displays the result.
- The calculation is done is three steps:
- Lexical analysis: The string is converted to tokens
- Parsing: Building an expression tree
- Evaluation: The expression is evaluated.
In the first phase of implementation we won't worry about error handling and symbolic variables -- we'll add them later.
Notice that there's nothing Haskell-specific in this design -- it's just a piece of good old software engineering. Some people worry that programming in Haskell means re-learning everything from scratch. This is based on seriously underestimating the amount of software engineering that is common to all programming tasks -- independent of the language.
Designing with Types
We haven't talked about types yet because, even though Haskell is a strongly typed language, it has a powerful type inference system. This feature makes quick prototyping easy. Sometimes you just want to write a function and not worry about types. The compiler will figure them out. (C++11 introduced a modicum of type inference with the keyword
auto.)
On the other hand, software design in Haskell often starts with types. Let's try this approach.
1. Lexical analyzer
Lexical analyzer is implemented as a function
tokenize that takes a string (of type
String) and returns a list of tokens. We'll define the
Token data type later. A list of tokens has the type
[Token] -- the square brackets are used to create lists (both list types, like
[Int], and list literals, like
[1, 2, 3]). Finally, a function type is constructed with an arrow
-> between the type of the argument and the type of the result (we'll get to multi-argument functions later). Putting all this together, we can write the Haskell type signature for the function
tokenize as follows:
tokenize :: String -> [Token]
This is read as: Tokenize is a function taking a string and returning a list of tokens. The double colon is used to introduce a type signature.
Type names must always start with capital letters, as in
String or
Double (except for names constructed from special characters, like the list type,
[]).
2. Parser
The parsing function takes a list of tokens and produces an expression. We'll define the
Expression type later. For now, this is the type of
parse:
parse :: [Token] -> Expression
3. Evaluator
We'll make
evaluate take an
Expression and return a value of the built in type
Double (double precision floating point number).
evaluate :: Expression -> Double
We can define dummy data types for
Token and
Expression, and dummy function bodies; and fire up the compiler to typecheck our design:
data Token data Expression tokenize :: String -> [Token] tokenize = undefined parse :: [Token] -> Expression parse = undefined evaluate :: Expression -> Double evaluate = undefined main :: IO () main = putStrLn "It works, so are we done yet?"
You might wonder how
undefined plays with the type checker. It turns out that the type of
undefined is the bottom of the type hierarchy, which means it can be implicitly converted to any type. For instance, in the definition of
tokenize the type of
undefined becomes the function type:
String->[Token].
The type of
main is always
IO (): the type of
IO monadic action that produces no result (only side effects). The type
() itself is called unit -- loosely corresponding to
void in C-like languages.
Recursion
Our design calls for a loop that accepts user input and displays the results. All loops in Haskell are implemented either using recursion or using (higher-order) functions whose implementation uses recursion.
You might be concerned about the performance or recursion and the possibility of blowing the stack -- in most cases this is not a problem since the compiler is able to turn most recursions into loops. This is called tail recursion optimization, where the recursive call at the very end of a function is simply turned into a goto to the beginning of the function. More serious performance concerns arise occasionally from Haskell's laziness but we'll talk about it later.
With that in mind, we are ready to implement the top-level loop of our calculator:
main :: IO () main = do line <- getLine putStrLn line main
You can think of
main as first calling
getLine, storing the result in the variable
line, then calling
putStrLn with that
line, and then calling itself again. This will create an infinite loop, but no stack will be hurt in the process, since this is a typical case of tail recursion.
Of course, what really happens when the program is running is slightly different because of the
IO monad and general laziness. So it would be more appropriate to say that
main is an
IO action that is a sequence of three other actions: the ones returned by
getLine,
putStrLn, and
main. The last action, when the time comes to execute it, will produce three new actions, etc. But everything happens on the need to run basis, so the inner
main will not be evaluated until the (blocking) action produced by
getLine delivers its result.
Thinking about recursion rather than looping might initially seem unnatural, but it quickly becomes second nature. The main reason Haskell doesn't use loops is because of immutability: Loops in imperative languages usually have some kind of mutable counter or a mutable pointer. It's relatively easy to replace those loops with recursion. But first we need to learn about conditionals: We have to be able to break out of recursion at some point.
Conditional
A conditional in Haskell is just a simple
if,
then,
else construct. The
else is mandatory.
Anything between
if and
then is the condition (you don't even have to surround it with parentheses), and it must evaluate to a Boolean. You can pretty much use the familiar equality and comparison operators,
>,
>=,
<,
<=,
==, to create Boolean values; except for the not-equal operator which is
/=. You can also combine them using
&& and
|| for logical and and or, and
not for not.
However, unlike in imperative languages, the Haskell if/then/else is not a statement but an expression (similar to C's (
?:) construct). It evaluates to either the
then or the
else expression, both of which have to be of the same type. For instance:
main = do putStrLn "Enter a number" str <- getLine print (if read str >= 1 then 1 else 0)
Here, the if/then/else expression that is the argument to
You might be wandering about the short-circuitting properties of if/then/else or the binary Boolean operators
&& and
||. In most languages the property of not evaluating the branch that is not taken has to be built into the language as a special feature. Otherwise constructs like:
x = (p != nullptr) ? *p : 0
wouldn't work properly. In Haskell, short-circuiting is just the side effect of laziness. The branch not taken is never used, so it won't be evaluated.
Several explanations are in order: I used the function
read to turn a string into a value. It's an interesting function -- it's overloaded on the return type. Here, the compiler deduced that an integral value was needed because it was compared to another integral value, 1. (Try experimenting with this code by inputing a floating point number. Then change the 1 in the if clause to 1.0 and see if the behavior changes.)
We are now ready to convert a simple imperative loop that prints numbers from 0 to 4 to Haskell. Here's the C++ loop:
for (int i = 0; i < 5; ++i) std::cout << i << std::endl
And here's its recursive counterpart written in Haskell:
loop :: Int -> IO () loop n = do if n < 5 then do putStrLn (show n) loop (n + 1) else return () main :: IO () main = loop 0
The Haskell code looks straightforward, although it's more verbose than its C++ counterpart. That's because I made it control-driven -- which is closer to the imperative version -- rather than data-driven. In Haskell one should really try to think at a higher abstraction level. Here, the goal is to print a list of integers from 0 to 4, so it would be more natural to start with such a list:
[0, 1, 2, 3, 4] or, using a handy shorthand,
[0..4]; and apply a function to it. We'll see examples of this approach later.
Let's talk about types:
loop returns a "void"
IO action, so both branches of the if must also return an
IO () action. The first branch is a sequence of two actions (hence the use of
do in that branch), the last of which is indeed of the type
IO () (that's the result of calling
loop). The second branch is more interesting. At first sight you might not even notice anything out of the ordinary: Well, it does return a unit value
(), which is of the type unit
(). But how does this value become an
IO () action? The trick is that
return is not a built-in keyword, it's actually an important monadic function (every monad has it).
The
return function turns whatever value it's given into a monadic value: here it turns
() into
IO (). It could also turn "Hello!" into
IO String, etc. We'll see more examples of using
return to "return" a value from a
do block in the future. Also notice the use of the
Int type -- it's a fixed precision integer.
In the next installment we'll start implementing the lexical analyzer and learn more about data types.
Exercises
Ex 1. Print squares of numbers from 1 to 10.
loop :: Int -> IO () loop n = undefined main :: IO () main = loop 1
loop :: Int -> IO () loop n = do if n <= 10 then do putStrLn (show (n * n)) loop (n + 1) else return () main :: IO () main = loop 1
Ex 2. No exposition of recursion is complete without factorial. Use the following property: Factorial of n is n times the factorial of (n - 1), and the factorial of 0 is 1.
fact :: Int -> Int fact n = undefined main = print (fact 20)
fact :: Int -> Int fact n = if n > 0 then n * fact (n - 1) else 1 main = print (fact 20)
Ex 3. The evaluation of factorial starts returning incorrect results right about n = 21 because of the
Int overflow. Try implementing a version that uses the infinite precision
Integer instead of
Int.
fact :: Int -> Int fact n = if n > 0 then n * fact (n - 1) else 1 fullFact :: ... fullFact n = ... main = do print (fact 23) print (fullFact 23)
fact :: Int -> Int fact n = if n > 0 then n * fact (n - 1) else 1 fullFact :: Integer -> Integer fullFact n = if n > 0 then n * fullFact (n - 1) else 1 main = do print (fact 23) print (fullFact 23)
Ex 4. No exposition of recursion is complete without Fibonacci numbers. (I'm using these mathematical examples because we haven't learned about data structures. In general, Haskell is not just about math.) Use the following property of Fibonacci numbers: The n'th Fibonacci number is the sum of the (n-1)'st and the (n-2)'nd, and the first and second Fibonacci numbers are both 1.
fib :: Int -> Int fib n = undefined main = print (fib 20)
fib :: Int -> Int fib n = if n > 2 then fib (n - 1) + fib (n - 2) else 1 main = print (fib 20) | https://www.schoolofhaskell.com/user/bartosz/basics-of-haskell/4-symbolic-calculator-recursion | CC-MAIN-2016-50 | refinedweb | 2,486 | 60.04 |
Catalog
- Preface
- Idempotency
- Lock properties
- Distributed lock
- design goal
- Design thinking
- boundary condition
- Main points of design
- Different implementations
- Concluding remarks
Preface
Suddenly I feel that wanting a quiet, stable life is almost a fantasy. Most prosperous eras in history lasted only thirty or forty years, and there is no guarantee that my own good years will fall inside one of them (though I genuinely hope the future keeps getting better, for the world at large and for me personally). A stable job, people who love me, people I love: even these feel a little extravagant. I know such things are found slowly over a lifetime. Perhaps in the blink of an eye I will look back and find that these choices were never really free, but driven by time. The older I get, the more insignificant I feel: the past is fixed and keeps piling up, and the choices left for the future keep shrinking.
Anyway, back to the topic. When I wrote the CAP article last time, I felt I had researched a lot and written it well. Looking back now, what was it even about? Maybe my brain is just slow: the kind that remembers slowly, understands slowly, and forgets quickly. So this time I want to do better, at least better than last time. By the way, when I have time I will go back and patch up the previous article; consider that a flag planted.
Idempotency
Definition
The definition of idempotency in HTTP/1.1 is: one or more identical requests for a resource should have the same effect on the resource itself (network timeouts and the like aside). In functional terms, f(f(x)) = f(x).
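As a tiny illustration (the class and method names here are my own, not from any library), an idempotent operation can be applied repeatedly without changing the outcome, while a non-idempotent one cannot:

```java
// Illustration of f(f(x)) = f(x): applying an idempotent function twice
// gives the same result as applying it once.
public class IdempotencyDemo {
    // Math.abs is idempotent: abs(abs(x)) == abs(x)
    public static int normalize(int x) { return Math.abs(x); }

    // Incrementing is NOT idempotent: applying it twice differs from once.
    public static int increment(int x) { return x + 1; }

    public static void main(String[] args) {
        System.out.println(normalize(normalize(-7)) == normalize(-7)); // true
        System.out.println(increment(increment(-7)) == increment(-7)); // false
    }
}
```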
objective
1. Prevent the disastrous consequences of request retries in critical operations such as transactions and transfers.
The scope of idempotency
request
1. Read requests – naturally idempotent.
2. Write requests – not idempotent by default; they must be controlled.
Database level
1. INSERT is not idempotent; control it through unique constraints on the business content.
2. UPDATE – control idempotency through the WHERE condition, and avoid relative values as much as possible. For example, UPDATE table1 SET column1 = column1 + 1 WHERE column2 = 2; changes the result on every execution, so it is not idempotent. UPDATE table1 SET column1 = 1 WHERE column2 = 2; leaves the same state no matter how many times it runs successfully, so it is idempotent.
3. DELETE – likewise, control through the WHERE condition and avoid relative values. For example, DELETE FROM table1 WHERE column1 < NOW(); should be rewritten as DELETE FROM table1 WHERE column1 < '2019-10-01';
Business level
1. When a service is deployed redundantly on multiple instances, requests are consumed concurrently, so they must be converted from parallel to serial processing.
Strategies to Guarantee Idempotency
The essence of guaranteeing idempotency is serializing operations on a resource, and this can be done at several levels.
For insert requests, deduplicate on the key business attributes (for example with a unique constraint on a business id); update requests can be controlled well with optimistic locks.
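The optimistic-lock idea can be sketched in memory with a version check. The class below is an illustrative stand-in (all names invented for the example) for the SQL pattern `UPDATE ... SET balance = ?, version = version + 1 WHERE id = ? AND version = ?`:

```java
// In-memory sketch of optimistic locking: an update succeeds only if the
// version read earlier is still current; otherwise the caller re-reads
// and retries. This mirrors a version-checked SQL UPDATE.
public class OptimisticAccount {
    private long balance;
    private long version;

    public OptimisticAccount(long initialBalance) { this.balance = initialBalance; }

    public synchronized long readVersion() { return version; }
    public synchronized long readBalance() { return balance; }

    // Returns true iff no other writer updated since expectedVersion was read.
    public synchronized boolean tryUpdate(long expectedVersion, long newBalance) {
        if (version != expectedVersion) return false; // stale read: caller retries
        balance = newBalance;
        version++;
        return true;
    }

    public static void main(String[] args) {
        OptimisticAccount acc = new OptimisticAccount(100);
        long v = acc.readVersion();
        System.out.println(acc.tryUpdate(v, 90)); // true: first writer wins
        System.out.println(acc.tryUpdate(v, 80)); // false: version is stale
        System.out.println(acc.readBalance());    // 90
    }
}
```

A retried request that carries the old version simply fails instead of overwriting the newer state, which is exactly the serialization this section is after.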
In distributed systems, distributed locks convert request processing for a resource from parallel to serial.
Lock properties
1. reentrant/non-reentrant
Reentrant:Same threadAfter the outer function acquires the lock, the code contained in the inner function to acquire the lock is unaffected. It is not necessary to apply for the lock again, but can be directly invoked for execution.
Non-reentrant: On the contrary, locks are acquired inside and outside the same thread before execution. It’s easy to cause deadlocks.
Both synchronized and Reentrantlock are reentrant locks
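A minimal sketch of why reentrancy matters: outer() already holds the monitor when it calls inner(), which asks for the same monitor, and the call still goes through without deadlock:

```java
// Reentrancy demo: both methods synchronize on the same instance monitor.
// Because synchronized is reentrant, the same thread re-enters freely.
public class ReentrantDemo {
    public synchronized String outer() { return "outer->" + inner(); }
    public synchronized String inner() { return "inner"; }

    public static void main(String[] args) {
        System.out.println(new ReentrantDemo().outer()); // outer->inner
    }
}
```

With a non-reentrant lock, the call to inner() would block forever waiting for a lock its own thread holds.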
2. Fairness/Inequity
Fairness: First come, first served, threads requesting locks form queue consumption locks.
Unfair: When the request comes, it requests the lock first, then executes it, and throws it at the end of the queue waiting for the lock to be acquired.
Synchronized unfairness
Reentrantlock Options
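For example, ReentrantLock exposes the fairness choice through its constructor:

```java
import java.util.concurrent.locks.ReentrantLock;

// ReentrantLock lets you pick the fairness policy at construction time;
// synchronized offers no such choice.
public class FairnessDemo {
    public static void main(String[] args) {
        ReentrantLock unfair = new ReentrantLock();      // default: unfair
        ReentrantLock fair   = new ReentrantLock(true);  // FIFO hand-off
        System.out.println(unfair.isFair()); // false
        System.out.println(fair.isFair());   // true
    }
}
```

Fair locks trade throughput for predictable ordering, which is why unfair is the default.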
3. read / write
Read lock: shared; many threads may read at the same time, but no thread may write while any read lock is held.
Write lock: exclusive; while a thread is writing, no other thread may read or write.
4. Sharing/monopoly
Exclusive lock: only one thread can hold the lock at a time; ReentrantLock is a mutex implemented in exclusive mode. Exclusivity is a pessimistic locking strategy.
Shared lock: the lock can be held by multiple threads at once, such as the read lock of a ReadWriteLock. This relaxed policy allows multiple reading threads to access a shared resource simultaneously.
Note: AQS (AbstractQueuedSynchronizer) provides both kinds of synchronization logic, exclusive and shared.
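A small demo of the shared read lock, including the write-to-read downgrade that ReentrantReadWriteLock supports:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// The read lock is shared (multiple acquisitions succeed), while the write
// lock is exclusive. A writer may downgrade by taking the read lock before
// releasing the write lock.
public class ReadWriteDemo {
    public static void main(String[] args) {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

        rw.readLock().lock();
        rw.readLock().lock();                      // shared: acquiring twice is fine
        System.out.println(rw.getReadLockCount()); // 2
        rw.readLock().unlock();
        rw.readLock().unlock();

        rw.writeLock().lock();                     // exclusive
        rw.readLock().lock();                      // downgrade: read while writing
        rw.writeLock().unlock();                   // now only the read lock is held
        System.out.println(rw.isWriteLocked());    // false
        rw.readLock().unlock();
    }
}
```

The reverse (upgrading a read lock to a write lock) is not supported and would deadlock.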
5. Interruptible/non-interruptible
Interruptible: a thread blocked waiting for the lock can be interrupted. ReentrantLock is interruptible (via lockInterruptibly()).
Uninterruptible: the opposite. synchronized is uninterruptible; a thread blocked on a monitor cannot be interrupted out of the wait.
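A sketch of interruptible waiting (the helper method name is mine, but the lockInterruptibly() behavior is standard): the main thread holds the lock, a second thread blocks trying to acquire it, and an interrupt wakes the waiter instead of leaving it stuck.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// lockInterruptibly() demo: a thread blocked waiting on a ReentrantLock can
// be interrupted; a thread blocked entering a synchronized block cannot.
public class InterruptDemo {
    // Returns true if the blocked waiter was woken by the interrupt.
    public static boolean waiterWasInterrupted() {
        ReentrantLock lock = new ReentrantLock();
        CountDownLatch interrupted = new CountDownLatch(1);
        lock.lock();                              // this thread holds the lock
        Thread waiter = new Thread(() -> {
            try {
                lock.lockInterruptibly();         // blocks, but interruptibly
                lock.unlock();
            } catch (InterruptedException e) {
                interrupted.countDown();          // woke via interrupt, not lock
            }
        });
        try {
            waiter.start();
            Thread.sleep(100);                    // let the waiter block
            waiter.interrupt();
            return interrupted.await(2, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            return false;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        System.out.println(waiterWasInterrupted()); // true
    }
}
```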
Distributed lock
A distributed lock ensures mutually exclusive access to a particular shared resource across processes running on different machines. Its essence is to serialize requests for that resource and thereby avoid duplicate processing. Note that it does not by itself solve request idempotency; that must still be handled in business code. A shared storage service outside the application is needed to hold the lock state.
Design goals
Safety properties
1. Mutual exclusion. Only one client can hold a given lock at any time; the existence of the lock is globally strongly consistent.
2. Deadlock freedom. There is a lock-expiry mechanism. First, the system providing the lock service is highly available and robust: with a multi-node service, the failure or network partition of any single node does not prevent locks from being acquired. Second, the client renews and releases its locks automatically.
3. Symmetry. For any lock, locking and unlocking must be done by the same client.
Efficiency properties
1. Fault tolerance: high availability and high performance. The service itself is highly available and robust; with a multi-node service, the failure or network partition of any single node does not affect lock acquisition.
2. Reentrancy: effectively reduces the occurrence of deadlocks.
3. Multiple acquisition modes: the caller can choose how long to keep trying to acquire the lock.
4. Simple client calls: the lock code is highly abstract, and the cost of integrating the service is minimal.
Design approach
1. Control of the shared resource. The identifier of the resource can be used as the key of the lock, so that the resource is controlled by a single process at a time.
2. Uniqueness of the lock. The process that acquires the lock obtains a unique identifier (the value), which becomes the condition for releasing the lock, so that other processes cannot release it by mistake. The get-compare-and-del sequence must be atomic to prevent mistaken deletion; this is usually done with a Lua script.
3. Avoid the lock being released, and the resource accessed in parallel, before its holder has finished. The holder renews the lease in time so that the lock of unfinished business is not released.
4. Prevent deadlock. The lock is released after a set expiry time.
Boundary conditions
1. The stability and consistency of the service that provides lock registration.
2. The lock fails to be renewed on schedule: e.g. the renewal heartbeat fails, or a GC pause in the service lasts longer than the lock's validity time.
3. The business process is falsely alive (stalled) while the TTL keeps being renewed.
Key design points
1. Lock registration. Registration must be atomic, i.e. checking whether the lock exists and registering it must be serialized.
2. Lock renewal. How to renew the lease with minimal intrusion into the business code? CAS atomicity.
3. Lock release. A user of the lock releases only the lock it holds.
Different implementations
Redis implementation
Principle
1. Single-instance principle
A single redis instance processes commands serially on its unique thread, i.e. it is an idempotent linear system. But a single instance is a single point of failure. If redis runs in master-slave mode, then because redis replication is asynchronous, the following can happen:

1. Client A acquires the lock on the master node.
2. The master crashes before the lock key is replicated to the slave.
3. The slave is promoted to master.
4. Client B acquires a lock on the same resource as A.

At this point the resource is being consumed by multiple processes concurrently, which violates the principle of mutual exclusion.

Of course, the probability of this situation is very low, and it is generally acceptable for resources that are not sensitive to it. Alternatively, a single-instance redis deployment is used, in which case the application needs to be more tolerant of single points of failure.
2. Implementation of RedLock algorithm
The essence of this implementation is to realize a consistency protocol over a redis cluster so that the key is unique. All nodes must be redis master nodes, completely independent of each other: no master-slave replication or any other cluster coordination mechanism.
To acquire a lock, the client performs the following steps:

1. Get the current time (in milliseconds).
2. Request the N master nodes in sequence, using the same key and a different value. (The client must set a per-request timeout that is much smaller than the automatic release time; for example, with an automatic release time of 10 seconds, a timeout of 5-50 milliseconds. This prevents wasting time on a downed node: if a node is unavailable, the next node should be requested immediately.)
3. Compute the time spent acquiring the lock (the current time minus the time recorded in step 1). If and only if the lock was acquired on at least N/2+1 nodes, and the time spent is less than the lock's validity time, is the acquisition considered successful.
4. If the lock was acquired, its effective validity is the initial expiry time minus the time spent acquiring it.
5. If the client failed to acquire the lock, for whatever reason (in fact there are two: fewer than N/2+1 successful nodes, or a timeout), it releases the lock on all nodes (even those where it never got the lock).
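The quorum arithmetic in these steps can be sketched without a real cluster. The in-memory `FakeNode` below is a hypothetical stand-in for one Redis master (timing is simplified, and no real network or TTL is involved):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RedLockSketch {
    // Hypothetical stand-in for one independent Redis master.
    static class FakeNode {
        private final Map<String, String> store = new HashMap<>();
        boolean setNx(String key, String value) {          // models SET key value NX EX ttl
            return store.putIfAbsent(key, value) == null;
        }
        void delIfValueMatches(String key, String value) { // models the Lua compare-and-del
            store.remove(key, value);
        }
    }

    static boolean acquire(List<FakeNode> nodes, String key, String value, long ttlMillis) {
        long start = System.currentTimeMillis();
        int acquired = 0;
        for (FakeNode n : nodes) {
            if (n.setNx(key, value)) acquired++;           // step 2: try every master
        }
        long elapsed = System.currentTimeMillis() - start;
        long validity = ttlMillis - elapsed;               // step 4: remaining validity
        boolean ok = acquired >= nodes.size() / 2 + 1 && validity > 0; // step 3: quorum
        if (!ok) {                                         // step 5: release everywhere on failure
            for (FakeNode n : nodes) n.delIfValueMatches(key, value);
        }
        return ok;
    }

    public static void main(String[] args) {
        List<FakeNode> cluster = Arrays.asList(new FakeNode(), new FakeNode(), new FakeNode());
        System.out.println(acquire(cluster, "mylock", "client-A", 10_000)); // true
        System.out.println(acquire(cluster, "mylock", "client-B", 10_000)); // false: A holds the quorum
    }
}
```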
According to the algorithm above, the redis cluster needs at least three master nodes, the implementation is cumbersome, the overhead of locking is high, and the demands on redis operations are high. Overall, the cost of implementing it yourself is too great. Fortunately, somebody has already done it: Redisson.
Implementation of Single Instance
Older articles generally register the lock with the SETNX command, but that command cannot set an automatic expiry time; expiry could only be controlled by storing a timestamp as the value, which is not great. Redis now officially recommends using the SET command.
```
SET resource-name anystring NX EX max-lock-time
```

Since version 2.6.12, the SET command supports multiple options:
- EX seconds: set the specified expire time, in seconds.
- PX milliseconds: set the specified expire time, in milliseconds.
- NX: only set the key if it does not already exist.
- XX: only set the key if it already exists.
When releasing locks, we can use Lua scripting language to guarantee atomicity for the release of locks.
```lua
if redis.call("get", KEYS[1]) == ARGV[1] then
    return redis.call("del", KEYS[1])
else
    return 0
end
```
Concrete code implementation
Lettuce is used here.
1. Acquiring the lock
```java
/**
 * If the lock is free, the current thread acquires it immediately;
 * otherwise the current thread spins until it does.
 */
@Override
public void lock() {
    while (true) {
        if (tryLock()) return;
        this.sleepByMillisecond(renewalTime >> 1); // back off for half a renewal period
    }
}

/**
 * Try to acquire the lock once; return true on success, false otherwise.
 */
@Override
public boolean tryLock() {
    // Use ThreadLocal to remember the lock held by this thread (reentrancy control).
    if (threadLocal.get() != null) return true;
    String set = statefulRedisConnection.sync()
            .set(lockKey, lockValue, new SetArgs().nx().ex(lockTime));
    if ("OK".equals(set)) {
        System.out.println("thread id:" + Thread.currentThread().getId()
                + " lock success! time:" + LocalTime.now());
        isOpenExpirationRenewal = true;
        threadLocal.set(this);
        this.scheduleExpirationRenewal(); // start the renewal (watchdog) thread
        return true;
    }
    return false;
}

/**
 * Try to acquire the lock within the given time;
 * return true if acquired before the deadline, false otherwise.
 */
@Override
public boolean tryLock(long time, TimeUnit unit) throws InterruptedException {
    LocalTime deadline = LocalTime.now().plus(time, unit.toChronoUnit());
    while (LocalTime.now().isBefore(deadline)) {
        if (tryLock()) return true;
    }
    return false;
}
```

(Note: the original timed tryLock compared a time captured once against itself, which would loop forever; the deadline above is computed once and compared against the current time.)
2. Rental renewal of locks
Two problems can occur here. One is that the main thread dies and the renewal thread cannot be stopped; the other is that the renewal itself fails. For the first case, the best approach currently seems to be to wrap the locked code in exception handling and unlock in a finally block. For the second, keep good logs and analyze them to find the problem in the code. If you have a better way, you are welcome to share it.
```java
@Override
protected void scheduleExpirationRenewal() {
    Thread thread = new Thread(new ExpirationRenewal());
    thread.start();
}

private class ExpirationRenewal implements Runnable {
    @Override
    public void run() {
        while (isOpenExpirationRenewal) {
            try {
                Thread.sleep(renewalTime);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            // Renew only if we still hold the lock (atomic check-and-expire via Lua).
            String expireFromLua =
                    "if redis.call('get', KEYS[1]) == ARGV[1]"
                  + " then return redis.call('expire', KEYS[1], ARGV[2])"
                  + " else return 0 end";
            Object eval = statefulRedisConnection.sync().eval(expireFromLua,
                    ScriptOutputType.INTEGER,
                    new String[]{lockKey}, lockValue, lockTime.toString());
            System.out.println(((Long) eval == 1L) ? "lease renewal success" : "lease renewal failure");
        }
    }
}
```
3. Release of locks
```java
@Override
public void unlock() {
    // Stop the renewal (watchdog) thread first.
    isOpenExpirationRenewal = false;
    // Delete the key only if it still holds our value (atomic via Lua).
    String delFromLua =
            "if redis.call(\"get\", KEYS[1]) == ARGV[1]"
          + " then return redis.call(\"del\", KEYS[1])"
          + " else return 0 end";
    Long eval = statefulRedisConnection.sync().eval(delFromLua,
            ScriptOutputType.INTEGER, new String[]{lockKey}, lockValue);
    threadLocal.remove(); // clear the reentrancy marker for this thread
    if (eval == 1L) {
        System.out.println("lock released");
    } else {
        // Better to log this: the lock had already expired or been released.
        System.out.println("lock was already released");
    }
}
```
4. Summary
Only reentrancy is implemented here. Switching between fair and unfair modes, and read-write locks, remain to be considered. The key to a fair lock is maintaining a cross-process queue of lock requests, which would have to live in redis itself.
2. Implementation of RedLock algorithm
1. Introduction to Redisson
Feeling lazy, here is the official profile directly:

Based on high-performance async and lock-free Java Redis client and Netty framework.

Redisson's feature set is very strong; it is worth reading about when you have time (and there is a Chinese document too).
2. Implementation of RedLock algorithm
Redisson's RedissonRedLock object implements the locking algorithm introduced by RedLock. It is used as follows.
```java
RLock lock1 = redissonInstance1.getLock("lock1");
RLock lock2 = redissonInstance2.getLock("lock2");
RLock lock3 = redissonInstance3.getLock("lock3");

RedissonRedLock lock = new RedissonRedLock(lock1, lock2, lock3);
// Locks lock1, lock2 and lock3 together;
// RedLock succeeds when a majority of nodes lock successfully.
lock.lock();
// ...
lock.unlock();
```
Multiple Redisson instances and RLock objects need to be created to form the distributed lock. The data structure chosen in redis is a hash, which makes a fair lock convenient to implement.
It is still somewhat troublesome to use.
Zookeeper implementation
Principle
Zookeeper is a distributed coordination service that implements a consistency protocol, split into server and client. The server side is itself a distributed application. The Zab consistency protocol, znode ephemeral sequential nodes, and the watch mechanism are the keys to implementing a distributed lock with zookeeper.
1. Data model of zookeeper
The hierarchical namespace can be seen as a distributed file system. It has a tree structure, levels are separated by "/", and most Unicode characters can be used in node names.
A node in the zookeeper tree is called a znode. Each znode maintains a stat structure, which stores the version numbers of data changes and ACL (Access Control List) changes, together with the corresponding timestamps. The ACL exists on every node and restricts who can do what; it is effectively the permission list of the node, covering CREATE, READ, WRITE, DELETE and ADMIN.
Reads and writes of a znode are atomic.
The nodes of znode can be divided into:
- Persistent Nodes
Even after the client that created a particular znode disconnects, a persistent node still exists. By default, all znodes are persistent unless specified otherwise.
- Temporary node Ephemeral Nodes
Ephemeral nodes live and die with the client-server session: when the session ends, the corresponding znode is deleted. Ephemeral nodes therefore cannot have child nodes. They play an important role in leader election.
- Sequence Nodes
Sequential nodes can be persistent or ephemeral. When a new znode is created as a sequential node, ZooKeeper sets the path of the znode by appending a 10-digit sequence number to the original name. The sequence number is monotonically increasing. For example, if a znode with path /mynode is created as a sequential node, ZooKeeper changes the path to /mynode0000000001 and sets the next sequence number to 0000000002. When the counter exceeds 2147483647 (2^31 - 1) it overflows (resulting in the name "-2147483648"). Sequential nodes play an important role in locking and synchronization.
- Container Nodes Version 3.5.3 Added
This node type was added for special cases such as leaders and locks. When the last child of a container node is deleted, the container becomes a candidate for deletion by the server at some point in the future (personally this feels similar to lazy expiry in redis). Creating a child under a container node may therefore throw KeeperException.NoNodeException, so you need to catch that exception and recreate the container node when it is thrown.
- TTL Nodes Version 3.5.3 Added
This node type can only be persistent. When creating a persistent node or a persistent sequential node, you can choose to set a TTL on it. If the node is not modified within its TTL and has no children, it will be deleted by the server at some future time. TTL nodes must be enabled through system properties; they are disabled by default.
2. Watches
Clients can be notified of node changes by setting a watch that listens to znode information.
Implementation steps
The key to the lock is the ephemeral sequential node plus the watch mechanism: the ephemeral sequential nodes form a queue of lock requests, and each waiter watches the node in front of it to learn when it has been consumed.

1. Create a persistent node lock as the parent node of the lock.
2. A client A requesting the lock creates an ephemeral sequential node lock000000N under the lock node.
3. Get the list of children under lock and look for a node with a smaller sequence number than client A's. If none exists, client A holds the smallest number and acquires the lock.
4. If a node with a smaller number does exist, set a watch on it. When that node is deleted, check again whether the current node is now the smallest; if it is, the lock is acquired.
Apache’s open source library Curator provides the corresponding implementation of distributed locks.
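With Curator, the recipe above collapses to a few lines. This is a sketch only: it assumes a ZooKeeper server reachable at localhost:2181 and the curator-recipes dependency on the classpath, so it is not runnable standalone:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class CuratorLockSketch {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        // Internally: creates an ephemeral sequential node under /mylock and
        // watches the node immediately before it, i.e. exactly the recipe above.
        InterProcessMutex mutex = new InterProcessMutex(client, "/mylock");
        mutex.acquire();
        try {
            // critical section
        } finally {
            mutex.release();
        }
        client.close();
    }
}
```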
Still, when there is time it is worth implementing one yourself, not only in the usual way above but also using container nodes or TTL nodes.
Summary
Compared with redis, the zookeeper implementation saves a lot of trouble because the consistency of the lock does not have to be handled by hand. When I have time I will look at the Curator source code.
Etcd implementation
What is etcd?
A distributed key-value (K-V) storage service. It implements the Raft consistency algorithm (which I have half forgotten; the material I read did not feel detailed enough, so I will study it carefully when I have time), so for high availability the cluster size must be 2N+1. It is mainly used to store data that rarely changes, and to provide watch queries. The service API can be invoked over HTTP/HTTPS or gRPC; the gRPC mode has its own API, so lease renewal does not have to be handled manually. gRPC is available from the V3 version. V3 still supports REST-style HTTP/HTTPS calls, but note that V2 and V3 are incompatible: most commands changed and the data does not interoperate (for example, data created with V2 is not visible to V3), and REST support in V3 is not very good.
Principle
Data model
Etcd has a basic key-value structure.
Logical view:
The logical view of the store is a flat binary key space. (To be honest, I did not understand this phrase well and need to check the material; my personal understanding is that it is a hierarchical arrangement of data, similar to a tree. If you know better, please advise.) The key space maintains multiple revisions: each transaction creates a new revision in the key space. A key's lifecycle runs from creation to destruction, and a key may have one or more lifecycles (meaning etcd keeps detailed records of the key's creation, modification and destruction across multiple generations). If a key does not exist at the current revision, its version starts from 1. Deleting a key writes a tombstone that closes the key's current generation (by resetting its version to 0). Each change to a key increments its version, which grows monotonically within one lifecycle. Once a compaction occurs, any lifecycle record (closed generation) from before the compaction revision is removed, and values set before the compaction revision are deleted (except the latest one).
Physical view
Etcd persists its KV data to disk in a B+ tree. An update does not modify the original structure in place; instead it generates an updated structure. Its read and write performance therefore cannot be compared with redis and the like.
The foundation of etcd implementing distributed lock
- Lease mechanism ("a short-lived renewable contract that deletes keys associated with it on its expiry"): the TTL (time-to-live) of a key, plus lease renewal.
- Revision mechanism ("a 64-bit cluster-wide counter that is incremented each time the key space is modified"): every key has a revision; each transaction creates a new revision in the key space, monotonically increasing from an initial value of zero. The order of write operations can be read off from the size of the revision. When implementing distributed locks, the order of lock acquisition can be decided by revision, which yields a fair lock.
- Prefix mechanism, also called the directory mechanism: keys can be created under a directory-like prefix, e.g. key1 = "/mylock/00001" and key2 = "/mylock/00002". A range query on the prefix /mylock returns the KV list, and the order of lock acquisition is controlled by revision to achieve a fair lock.
- Watch mechanism: a watch can monitor a single key and also supports range (prefix) monitoring.
Implementation steps
Comparing etcd with zookeeper, the overall mechanism is the same, so the implementation is roughly the same.
The steps are as follows:
1. Create a global key prefix for the distributed lock of the resource, e.g. /mylock; the key created by the first requesting client is then /mylock/uuid001, the second /mylock/uuid002.
2. Create a lease (Lease) with an appropriate TTL for the resource.
3. The client puts its own key, e.g. /mylock/uuid001, with a non-empty value, and records the revision number returned by the put.
4. Get all key-value pairs under the prefix, sorted by revision number, smallest first.
5. Check whether the revision of its own put is the smallest. If so, the lock is acquired and the renewal thread is started. If not, take the previous key in the ordering and create a watcher on it (which blocks until that key is deleted).
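The revision-ordering logic of steps 3 to 5 can be simulated in memory. The `FakeEtcd` below is a hypothetical stand-in, not the jetcd API; a real client would also attach a lease and a watch:

```java
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

public class EtcdLockSketch {
    // Hypothetical in-memory stand-in for an etcd keyspace with revisions.
    static class FakeEtcd {
        private long revision = 0;
        private final Map<String, Long> kv = new TreeMap<>();
        long put(String key) { kv.put(key, ++revision); return revision; } // step 3
        SortedMap<String, Long> range(String prefix) {                     // step 4
            TreeMap<String, Long> out = new TreeMap<>();
            kv.forEach((k, r) -> { if (k.startsWith(prefix)) out.put(k, r); });
            return out;
        }
    }

    /** Step 5: the client holds the lock iff its revision is the smallest under the prefix. */
    static boolean holdsLock(FakeEtcd etcd, String prefix, long myRevision) {
        return etcd.range(prefix).values().stream().min(Long::compare)
                   .map(min -> min == myRevision).orElse(false);
    }

    public static void main(String[] args) {
        FakeEtcd etcd = new FakeEtcd();
        long a = etcd.put("/mylock/uuid001");
        long b = etcd.put("/mylock/uuid002");
        System.out.println(holdsLock(etcd, "/mylock/", a)); // true  -> client A proceeds
        System.out.println(holdsLock(etcd, "/mylock/", b)); // false -> client B watches uuid001
    }
}
```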
The latest versions of etcd provide a distributed lock implementation themselves: there is no need to take care of release and renewal, the lock is released automatically when unlock is called, and the lock keyword works essentially as in the principle above.
Java clients for etcd include etcd4j, jetcd, and so on.
Summary
The etcd implementation is relatively simple, since etcd itself already implements a distributed lock: a single lock command is enough to acquire one. But etcd does not yet seem common in the rest of the Java ecosystem, and it is not very cost-effective to deploy it only for distributed locks.
Concluding remarks
Finally finished; in truth it was rushed. The zookeeper part in the middle was not written out in detail, and the redis part does not implement a fair lock. These components are more and more complete nowadays and some of this no longer has to be written by hand, but the principles must be understood. In essence they are all concrete realizations of compare - renew - release. Given the chance, it is still worth reading the source code and broadening one's thinking: I hope to find time for the lock source code of Redisson, Curator and jetcd. Alas, setting flags has always felt good, and procrastinating has always felt good; I do not know which will win.
Off to sleep.
Martin Fowler
This volume is a handbook for enterprise system developers, guiding them through the intricacies and lessons learned in enterprise application development. It provides proven solutions to the everyday problems facing information systems developers.
I'm thinking about how to represent a complex structure in a SQL Server database.
Consider an application that needs to store details of a family of objects, which share some attributes, but have many others not common. For example, a commercial insurance package may include liability, motor, property and indemnity cover within the same policy record.
It is trivial to implement this in C#, etc, as you can create a Policy with a collection of Sections, where Section is inherited as required for the various types of cover. However, relational databases don't seem to allow this easily.
I can see that there are two main choices:
Create a Policy table, then a Sections table, with all the fields required, for all possible variations, most of which would be null.
Create a Policy table and numerous Section tables, one for each kind of cover.
Both of these alternatives seem unsatisfactory, especially as it is necessary to write queries across all Sections, which would involve numerous joins, or numerous null-checks.
What is the best practice for this scenario?
@Bill Karwin describes three inheritance models in his SQL Antipatterns book, when proposing solutions to the SQL Entity-Attribute-Value antipattern. This is a brief overview:
Using a single table as in your first option is probably the simplest design. As you mentioned, many attributes that are subtype-specific will have to be given a NULL value on rows where these attributes do not apply. With this model, you would have one policies table, which would look something like this:
```
+------+---------------------+----------+----------------+------------------+
| id   | date_issued         | type     | vehicle_reg_no | property_address |
+------+---------------------+----------+----------------+------------------+
| 1    | 2010-08-20 12:00:00 | MOTOR    | 01-A-04004     | NULL             |
| 2    | 2010-08-20 13:00:00 | MOTOR    | 02-B-01010     | NULL             |
| 3    | 2010-08-20 14:00:00 | PROPERTY | NULL           | Oxford Street    |
| 4    | 2010-08-20 15:00:00 | MOTOR    | 03-C-02020     | NULL             |
+------+---------------------+----------+----------------+------------------+
 \------ COMMON FIELDS -------/ \------ SUBTYPE SPECIFIC FIELDS -------/
```
Keeping the design simple is a plus, but the main problems with this approach are the following:
When it comes to adding new subtypes, you would have to alter the table to accommodate the attributes that describe these new objects. This can quickly become problematic when you have many subtypes, or if you plan to add subtypes on a regular basis.
The database will not be able to enforce which attributes apply and which don't, since there is no metadata to define which attributes belong to which subtypes.
You also cannot enforce NOT NULL on attributes of a subtype that should be mandatory. You would have to handle this in your application, which in general is not ideal.
Another approach to tackle inheritance is to create a new table for each subtype, repeating all the common attributes in each table. For example:
```
--// Table: policies_motor
+------+---------------------+----------------+
| id   | date_issued         | vehicle_reg_no |
+------+---------------------+----------------+
| 1    | 2010-08-20 12:00:00 | 01-A-04004     |
| 2    | 2010-08-20 13:00:00 | 02-B-01010     |
| 3    | 2010-08-20 15:00:00 | 03-C-02020     |
+------+---------------------+----------------+

--// Table: policies_property
+------+---------------------+------------------+
| id   | date_issued         | property_address |
+------+---------------------+------------------+
| 1    | 2010-08-20 14:00:00 | Oxford Street    |
+------+---------------------+------------------+
```
This design will basically solve the problems identified for the single table method:
Mandatory attributes can now be enforced with NOT NULL.
Adding a new subtype requires adding a new table instead of adding columns to an existing one.
There is also no risk that an inappropriate attribute is set for a particular subtype, such as the vehicle_reg_no field for a property policy.
There is no need for the type attribute as in the single table method. The type is now defined by the metadata: the table name.
However this model also comes with a few disadvantages:
The common attributes are mixed with the subtype specific attributes, and there is no easy way to identify them. The database will not know either.
When defining the tables, you would have to repeat the common attributes for each subtype table. That's definitely not DRY.
Searching for all the policies regardless of the subtype becomes difficult, and would require a bunch of UNIONs.
This is how you would have to query all the policies regardless of the type:
```sql
SELECT date_issued, other_common_fields, 'MOTOR' AS type
  FROM policies_motor
UNION ALL
SELECT date_issued, other_common_fields, 'PROPERTY' AS type
  FROM policies_property;
```
Note how adding new subtypes would require the above query to be modified with an additional UNION ALL for each subtype. This can easily lead to bugs in your application if this operation is forgotten.
This is the solution that @David mentions in the other answer. You create a single table for your base class, which includes all the common attributes. Then you would create specific tables for each subtype, whose primary key also serves as a foreign key to the base table. Example:
```sql
CREATE TABLE policies (
    policy_id   int,
    date_issued datetime

    -- // other common attributes ...
);

CREATE TABLE policy_motor (
    policy_id      int,
    vehicle_reg_no varchar(20),

    -- // other attributes specific to motor insurance ...

    FOREIGN KEY (policy_id) REFERENCES policies (policy_id)
);

CREATE TABLE policy_property (
    policy_id        int,
    property_address varchar(20),

    -- // other attributes specific to property insurance ...

    FOREIGN KEY (policy_id) REFERENCES policies (policy_id)
);
```
This solution solves the problems identified in the other two designs:
Mandatory attributes can be enforced with NOT NULL.
Adding a new subtype requires adding a new table instead of adding columns to an existing one.
No risk that an inappropriate attribute is set for a particular subtype.
No need for the type attribute.
Now the common attributes are not mixed with the subtype specific attributes anymore.
We can stay DRY, finally. There is no need to repeat the common attributes for each subtype table when creating the tables.
Managing an auto incrementing id for the policies becomes easier, because this can be handled by the base table, instead of each subtype table generating them independently.
Searching for all the policies regardless of the subtype now becomes very easy: no UNIONs needed, just a SELECT * FROM policies.
I consider the Class Table Inheritance approach the most suitable in most situations.
The names of these three models come from Martin Fowler's book Patterns of Enterprise Application Architecture.
I'm a confused newbie and hobbyist programmer trying to get a grip on this, so forgive me if my question is a little off or doesn't make much sense.
I see a lot of questions on SO revolving around the use of design patterns, and I'm wondering if anyone has a good resources for learning about, and implementing design patterns? I understand the general idea, and know how/when to use a couple of them(Singletons, Factory methods) but I know I'm missing out.
(Just in case it matters, my language of preference is C# but I could learn from examples in other languages)
Applying UML and Patterns by Craig Larman. It starts from the basics of analysis and design and uses a simple case study. It introduces most of the basic patterns in a simple way.
An introductory book that I found useful and well written is Design Patterns Explained by Alan Shalloway and James Trott (Addison Wesley).
Do not start from the Gang of Four book, for it is not an introductory book by any means.
The original Design Patterns book is a must-read for all programmers.
It is an excellent book on every level: layout, clarity, insight, depth. It's one of those great books that you first read cover-to-cover, and then use as a reference until you literally know it inside out.
You could start by the Wikipedia page, but treat yourself with the great book too.
Design patterns are great for various reasons:
But when your goal is just to learn design patterns I think you are missing the fundamentals. All design patterns are based on more common principles: high cohesion, low coupling, the Open-Closed Principle, DRY, the Liskov Substitution Principle, etc. For these fundamentals I would read the following books in this order:
After that you are ready for the basic gang of four design patterns
The next step:
And always remember : the pattern is not the goal !
Before spending money on books I would recommend Wikipedia's excellent design patterns page. Also for something different Google for "design pattern screencasts" or search for "design patterns" on YouTube. Getting the same information presented differently often helps the penny drop.
The Gang of Four book is the definitive text on the most well known patterns but is not that easy to read and with C++ examples not everyone's favourite.
The Head First Design Patterns text is far more accessible but only contains a subset of the Gang of Four patterns.
The most important thing is to understand where and why a particular pattern is useful. Afterwards search the web for implementation examples in the language of your choice and experiment until you "get it". Understand one pattern before moving on to the next. Everyone understands some patterns better than others (and there are hundreds of lesser known ones).
Just keep plugging away.
Head First Design Patterns and the Design Pattern Wikipedia page are the best resources for beginners. FluffyCat is another good, free online resource for design patterns in both Java and PHP.
The Gang of Four book is where to go afterward, but it's fairly advanced, so I'd wait until you have a pretty firm grasp from the other resources.
I'd add that Design Patterns book from the "Gang of four" is a bible for anyone who is seriously interested in design patterns.
According to Sun and Msdn it is a design pattern.
According to Wikipedia it is an architectural pattern
In comparison to design patterns, architectural patterns are larger in scale. (Wikipedia - Architectural pattern)
Or it is an architectural pattern that also has a design pattern ?
Which one is true ?
MVC is always mentioned and introduced as, or in, the presentation layer in software architecture books.
Read these books:
Architecting Microsoft.NET Solutions for the Enterprise (Microsoft press)
Professional ASP.NET design patterns (Wrox)
Enterprise Solution Patterns Using Microsoft .NET (Microsoft Press)
Patterns of Enterprise Application Architecture (Addison Wesley)
A Practical Guide to Enterprise Architecture (Prentice Hall)
What does the term Plain Old Java Object (POJO) mean? I couldn't find anything explanatory enough.
POJO's Wikipedia page says that POJO is an ordinary Java Object and not a special object. Now, what makes or what doesn't make an object special in Java?
The above page also says that a POJO should not have to extend prespecified classes, implement prespecified Interfaces or contain prespecified Annotations. Does that also mean that POJOs are not allowed to implement interfaces like
Serializable,
Comparable or classes like Applets or any other user-written Class/Interfaces?
Also, does the above policy (no extending, no implementing) means that we are not allowed to use any external libraries?
Where exactly are POJOs used?
EDIT: To be more specific, am I allowed to extend/implement classes/interfaces that are part of the Java or any external libraries?
What does the term Plain Old Java Object (POJO) mean?
POJO was coined by Martin Fowler, Rebecca Parsons and Josh Mackenzie when they were preparing for a talk at a conference in September 2000. Martin Fowler in Patterns of Enterprise Application Architecture explains how to implement a Domain Model pattern in Java. After enumerating some of disadvantages of using EJB Entity Beans:
There's always a lot of heat generated when people talk about developing a Domain Model in J2EE. Many of the teaching materials and introductory J2EE books suggest that you use entity beans to develop a domain model, but there are some serious problems with this approach, at least with the current (2.0) specification.
Entity beans are most useful when you use Container Managed Persistence (CMP)...
Entity beans can't be re-entrant. That is, if you call out from one entity bean into another object, that other object (or any object it calls) can't call back into the first entity bean...
...If you have remote objects with fine-grained interfaces you get terrible performance...
To run with entity beans you need a container and a database connected. This will increase build times and also increase the time to do test runs since the tests have to execute against a database. Entity beans are also tricky to debug.
As an alternative, he proposed to use Regular Java Objects for Domain Model implementation:
The alternative is to use normal Java objects, although this often causes a surprised reaction—it's amazing how many people think that you can't run regular Java objects in an EJB container. I've come to the conclusion that people forget about regular Java objects because they haven't got a fancy name. That's why, while preparing for a talk in 2000, Rebecca Parsons, Josh Mackenzie, and I gave them one: POJOs (plain old Java objects). A POJO domain model is easy to put together, is quick to build, can run and test outside an EJB container, and is independent of EJB (maybe that's why EJB vendors don't encourage you to use them).
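To make the idea concrete, here is a minimal sketch of what "plain old" means in practice: an ordinary class with fields, a constructor, and accessors, extending no framework base class and carrying no mandatory annotations (the class name and fields here are invented for illustration).

```java
// A POJO: a plain class with no framework base class, no required
// interfaces, and no mandatory annotations. Implementing a standard
// interface like Serializable would not, in common usage, stop it
// from being considered a POJO.
class Customer {
    private final String name;
    private int orders;

    Customer(String name) {
        this.name = name;
    }

    String getName() { return name; }

    int getOrders() { return orders; }

    void addOrder() { orders++; }
}
```

Such a class can be instantiated and tested without any container, which is exactly the property Fowler and colleagues were advocating.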
The main web application of my company is crying out for a nifty set of libraries to make it in some way maintainable and scalable, and one of my colleagues has suggested CSLA. So I've bought the book but as:
programmers don't read books anymore
I wanted to gauge the SOFlow community's opinion of it.
So here are my questions:
Before I specifically answer your question, I'd like to put a few thoughts down. Is CSLA right for your project? It depends. I would personally consider CSLA for desktop based applications that does not value unit testing as a high priority. CSLA is great if you want to easily scale to an n-tier application. CSLA tends to get some flack because it does not allow pure unit testing. This is true, however like anything in technology, I believe that there is No One True Way. Unit testing may not be something you are undertaking for a specific project. What works for one team and one project may not work for another team or other project.
There are also many misconceptions in regards to CSLA. It is not an ORM; it is not a competitor to NHibernate (in fact using CSLA Business Objects & NHibernate as data access fit really well together). It formalises the concept of a Mobile Object.
1. How many people are using CSLA?
Based on the CSLA Forums, I would say there are quite a number of CSLA based projects out there. Honestly though, I have no idea how many people are actually using it. I have used it in the past on two projects.
2. What are the pros and cons?
While it is difficult to summarise in a short list, here are some of the pros and cons that come to mind.
Pros:
Cons:
3. After reading this does CSLA really not fit in with TDD?
I haven't found an effective way to do TDD with CSLA. That said, I am sure there are many smarter people out there than me that may have tried this with greater success.
4. What are my alternatives?
Domain-Driven-Design is getting a big push at the moment (and rightfully so - it's fantastic for some applications). There are also a number of interesting patterns developing from the introduction of LINQ (and LINQ to SQL, Entity Framework, etc). Fowler's book PoEAA details many patterns that may be suitable for your application. Note that some patterns are competing (i.e. Active Record and Repository), and thus are meant to be used for specific scenarios. While CSLA doesn't exactly match any of the patterns described in that book, it most closely resembles Active Record (although I feel it is short-sighted to claim an exact match for this pattern).
5. If you have stopped using it or decided against why?
I didn't fully recommend CSLA for my last project, because I believe the scope of the application is too large for the benefits CSLA provides.
I would not use CSLA on a web project. I feel there are other technologies better suited to building applications in that environment.
In summary, while CSLA is anything but a silver bullet, it is appropriate for some scenarios.
Hope this helps!
We implement an One-to-Many relationship by adding one Table's PK, as FK to the other Table. We implement a Many-to-Many relationship by adding 2 Table's PKs to a third Table.
How do we implement an IS-A Relationship ?
The Entities are TECHNICIAN and ADMINISTRATIVE which both are EMPLOYEE. I could just use an extra field in the Table EMPLOYEE(id, name, surname, role, ...AdminFields..., ...TechFields...)
but I would like to explore the IS-A option.
EDIT: I did as Donnie suggested, but without the role field.
If you have an OO application that you need to connect to a relational back-end database, I'd recommend getting Martin Fowler's Patterns of Enterprise Application Architecture.
He also has some relevant notes and diagrams on his website. Specifically, the patterns Single Table Inheritance, Class Table Inheritance and Concrete Table Inheritance describe three tactics for mapping IS-A in data tables.
If you're using Hibernate or JPA, they support mappings for all of these, though they have different names for them.
In this specific instance, I wouldn't use IS-A at all though.
Things like employee roles are better modeled as HAS-A.
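As a sketch of the HAS-A approach suggested above (class and field names are invented for illustration), one Employee class holds a set of roles as data instead of subclassing per role, so an employee can gain or lose a role without changing type:

```java
import java.util.EnumSet;
import java.util.Set;

// HAS-A: roles are data on the employee, not subclasses of it.
class Employee {
    enum Role { TECHNICIAN, ADMINISTRATIVE }

    private final String name;
    private final Set<Role> roles = EnumSet.noneOf(Role.class);

    Employee(String name) { this.name = name; }

    String getName() { return name; }

    void addRole(Role role) { roles.add(role); }

    void removeRole(Role role) { roles.remove(role); }

    boolean hasRole(Role role) { return roles.contains(role); }
}
```

With IS-A subclasses, moving a technician into administration would mean destroying one object and creating another; with HAS-A it is a single method call.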
I wrote a class that tests for equality, less than, and greater than with two doubles in Java. My general case is comparing price that can have an accuracy of a half cent. 59.005 compared to 59.395. Is the epsilon I chose adequate for those cases?
private final static double EPSILON = 0.00001;

/**
 * Returns true if two doubles are considered equal. Tests if the absolute
 * difference between two doubles is less than .00001. This should be fine
 * when comparing prices, because prices have a precision of .001.
 *
 * @param a double to compare.
 * @param b double to compare.
 * @return true if two doubles are considered equal.
 */
public static boolean equals(double a, double b) {
    return a == b ? true : Math.abs(a - b) < EPSILON;
}

/**
 * Returns true if two doubles are considered equal. Tests if the absolute
 * difference between the two doubles is less than a given double (epsilon).
 * Determining the given epsilon is highly dependent on the precision of the
 * doubles that are being compared.
 *
 * @param a double to compare.
 * @param b double to compare.
 * @param epsilon double which is compared to the absolute difference of two
 *                doubles to determine if they are equal.
 * @return true if a is considered equal to b.
 */
public static boolean equals(double a, double b, double epsilon) {
    return a == b ? true : Math.abs(a - b) < epsilon;
}

/**
 * Returns true if the first double is considered greater than the second
 * double. Tests if the difference of first minus second is greater than
 * .00001. This should be fine when comparing prices, because prices have a
 * precision of .001.
 *
 * @param a first double
 * @param b second double
 * @return true if the first double is considered greater than the second
 *         double
 */
public static boolean greaterThan(double a, double b) {
    return greaterThan(a, b, EPSILON);
}

/**
 * Returns true if the first double is considered greater than the second
 * double. Tests if the difference of first minus second is greater than a
 * given double (epsilon). Determining the given epsilon is highly dependent
 * on the precision of the doubles that are being compared.
 *
 * @param a first double
 * @param b second double
 * @return true if the first double is considered greater than the second
 *         double
 */
public static boolean greaterThan(double a, double b, double epsilon) {
    return a - b > epsilon;
}

/**
 * Returns true if the first double is considered less than the second
 * double. Tests if the difference of second minus first is greater than
 * .00001. This should be fine when comparing prices, because prices have a
 * precision of .001.
 *
 * @param a first double
 * @param b second double
 * @return true if the first double is considered less than the second
 *         double
 */
public static boolean lessThan(double a, double b) {
    return lessThan(a, b, EPSILON);
}

/**
 * Returns true if the first double is considered less than the second
 * double. Tests if the difference of second minus first is greater than a
 * given double (epsilon). Determining the given epsilon is highly dependent
 * on the precision of the doubles that are being compared.
 *
 * @param a first double
 * @param b second double
 * @return true if the first double is considered less than the second
 *         double
 */
public static boolean lessThan(double a, double b, double epsilon) {
    return b - a > epsilon;
}
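A quick illustration of why an epsilon is needed at all, using the same 0.00001 tolerance as the question's code (the class here is a self-contained restatement of just the equality check, not the poster's full class):

```java
// Minimal epsilon comparison, mirroring the approach in the question.
class PriceMath {
    static final double EPSILON = 0.00001;

    static boolean nearlyEquals(double a, double b) {
        return a == b || Math.abs(a - b) < EPSILON;
    }
}
```

Binary floating point cannot represent 0.1 exactly, so `0.1 + 0.2 == 0.3` is false, while `PriceMath.nearlyEquals(0.1 + 0.2, 0.3)` is true; half-cent prices such as 59.005 and 59.395 differ by far more than the epsilon, so they are still distinguished.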
If you are dealing with money I suggest checking the Money design pattern (originally from Martin Fowler's book on enterprise architectural design).
I suggest reading this link for the motivation:
There's a lot of sites out there that teach people how to build better software--but why is it that there are very few sites that actually give detailed descriptions of the domains that we (as programmers) are supposed to create? One can only construct so many inventory, accounting, and ERP systems before a pattern of common requirements start to emerge among the different types of systems. Logically speaking, if programmers spend so much time trying to create reusable components in their architectures, does that imply that they should have to have some reusable "blueprint" that describes the systems that they're supposed to create? In other words, it seems like the focus of software development has been too focused on "how" software should be built rather than to catalog and accurately specify (with detailed requirements) "what" should be used in the first place.
So my question is this: Has there been any work done to catalog all the different types of system specifications into a single place, all on a single site? If lacking the proper requirements at the start of the project is one of the banes of software development, wouldn't it make more sense to be able to 'reuse' requirement specifications from previous systems of the same type that have already been written?
It approaches reusable design from the data model point of view, as opposed to end user requirements or OO designs. However, I find that to be very useful - once you have a good grasp of the data model, you have a big jump on the requirements and the entities that will eventually be modeled as classes.
You might want to check out Martin Fowler's Patterns of Enterprise Application Architecture - while not specs, it seems to be about the sort of things you are after.
Disclaimer: I haven't read it myself, I only know of its existence.
What's the penetration of design patterns in the real world? Do you use them in your day to day job - discussing how and where to apply them with your coworkers - or do they remain more of an academic concept?
Do they actually provide actual value to your job? Or are they just something that people talk about to sound smart?
Note: For the purpose of this question ignore 'simple' design patterns like Singleton. I'm talking about designing your code so you can take advantage of Model View Controller, etc.
I absolutely use design patterns. At this point I take MVC for granted as a design pattern. My primary reason for using them is that I am humble enough to know that I am likely not the first person to encounter a particular problem. I rarely start a piece of code knowing which pattern I am going to use; I constantly watch the code to see if it naturally develops into an existing pattern.
I am also very fond of Martin Fowler's Patterns of Enterprise Application Architecture. When a problem or task presents itself, I flip to related section (it's mostly a reference book) and read a few overviews of the patterns. Once I have a better idea of the general problem and the existing solutions, I begin to see the long term path my code will likely take via the experience of others. I end up making much better decisions.
Design patterns definitely play a big role in all of my work.
Is there a Java Application Architecture Guide that is a counterpart of this: ?
The following should be helpful to you
Although, having had a quick glance at the document from codeplex, I can tell you that probably 70-80% of what is in there applies to Java as well.
Up until now I've been using Active records in all my c# database driven applications. But now my application requires my persistence code being split from my business objects. I have read a lot of posts regarding Martin Fowler's data mapping pattern, but my knowledge of this pattern is still very limited.
Let's use the following example:
If I have 2 tables - Customer and CustomerParameters. The CustomerParameters table contains default Customer values for creating a new Customer.
I will then have to create a CustomersMapper class to handle all of the Customer persistence. My Customer and CustomersList class will then collaborate with this mapper class in order to persist customer data.
I have the following questions:
How would I transfer raw data TO & FROM my Customer class to the mapper without breaking certain business rules? DTO's?
Is it acceptable to have a SaveAll and LoadAll method in my Mapper class for updating and loading multiple customers' data? If so, in case of SaveAll, how will the mapper know when to update or insert data?
Will the Customer mapper class be responsible for retrieving the default values from the CustomerParameters table as well, or will it be better to create a CustomerParameters mapper?
An O/R mapper tool is not really an option here. The database I'm using is Transactional and requires that I write my own Mapper Pattern.
Any ideas and comments will be greatly appreciated.
Shaun I would answer your questions this way:
ad 1) Mapper is responsible for creating the Customer object. Your Mapper object will have something like a RetrieveById method (for example). It will accept an ID and somehow (that's the responsibility of the Mapper object) construct the valid Customer object. The same is true the other way. When you call the Mapper.Update method with a valid Customer object, the Mapper object is responsible for making sure that all the relevant data are persisted (wherever appropriate - db, memory, file, etc.)
ad 2) As I noted above retrieve/persist are methods on Mapper object. It is its responsibility to provide such a functionality. Therefore LoadAll, SaveAll (probably passing an array of value objects) are valid Mapper methods.
ad 3) I would say yes. But you can separate various aspects of Mapper objects into separate classes (if you want to/need to): default values, rule validation, etc.
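The mapper responsibilities described in 1) and 2) can be sketched like this (shown in Java for brevity, with an in-memory map standing in for the real database; all class and method names are invented):

```java
import java.util.HashMap;
import java.util.Map;

// Domain object: carries no persistence logic of its own.
class Customer {
    final int id;
    String name;
    Customer(int id, String name) { this.id = id; this.name = name; }
}

// Mapper: knows how to load and store Customers. A HashMap plays
// the role of the database table here.
class CustomerMapper {
    private final Map<Integer, String> table = new HashMap<>();

    Customer retrieveById(int id) {
        String name = table.get(id);
        return name == null ? null : new Customer(id, name);
    }

    void save(Customer c) {
        // The update-or-insert decision lives in the mapper,
        // not in the domain object.
        table.put(c.id, c.name);
    }
}
```

A SaveAll would simply loop over a collection calling save, with the mapper deciding per object whether to insert or update (for example by checking whether the key already exists).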
I hope it helps. I really suggest/recommend you to read Martin Fowler's book Patterns of Enterprise Application Architecture.
Consider a Database interaction module written in PHP that contains classes for interacting with the database. I have not started coding the class so I won't be able to give code snippets.
In addition, I am thinking of creating another class as follows: -
DatabaseHelper : A class that will have a member that represents the connection to the database. This class will contain the lower level methods for executing SQL queries such as executeQuery(query,parameters), executeUpdate(query,parameters) and so on.
At this point, I have two options to use the DatabaseHelper class in other classes : -.
Edit:?
Let's answer your questions from top to bottom, and see what I can add to what you say.
Essentially you have two choices here. The method you described is called the active record pattern. The object itself knows how and where it is stored. For simple objects that interact with a database to create / read / update / delete, this pattern is really useful.
If the database operations become more extensive and less simple to understand, it is often a good choice to go with a data mapper (eg. this implementation). This is a second object that handles all the database interactions, while the object itself (eg. User or Location) only handles operations that are specific to that object (eg. login or goToLocation). If you ever want to change the storage of your objects, you will only have to create a new data mapper. Your object won't even know that something changed in the implementation. This enforces encapsulation and separation of concerns.
There are other options, but these two are the most used ways to implement database interactions.
In addition, I am thinking of creating another class as follows: -
DatabaseHelper : A class that will have a static member that represents the connection to the database. This class will contain the lower level methods for executing SQL queries such as executeQuery(query,parameters), executeUpdate(query,parameters) and so on.
What you are describing here sounds like a singleton. Normally this isn't really a good design choice. Are you really, really certain that there will never be a second database? Probably not, so you should not confine yourself to an implementation that only allows for one database connection. Instead of making a DatabaseHelper with static members, you would do better to create a Database object with some methods that allow you to connect, disconnect, execute a query, etc. This way you can reuse it if you ever need a second connection.
At this point, I have two options to use the DatabaseHelper class in other classes : -
- The User and Locations class will extend the DatabaseHelper class so that they can use the inherited executeQuery and executeUpdate methods in DatabaseHelper. In this case, DatabaseHelper will ensure that there is only one instance of the connection to the database at any given time.
- The DatabaseHelper class will be injected in the User and Locations class through a Container class that will make User and Location instances. In this case, the Container will make sure that there is only one instance of DatabaseHelper in the application at any given time.
The first option isn't really viable. If you read the description of inheritance, you will see that inheritance is normally used to create a subtype of an existing object. An User is not a subtype of a DatabaseHelper, nor is a location. A MysqlDatabase would be a subtype of a Database, or a Admin would be a subtype of an User. I would advise against this option, as it isn't following the best practices of object oriented programming.
The second option is better. If you choose to use the active record method, you should indeed inject the Database into the User and Location objects. This should of course be done by some third object that handles all these kind of interactions. You will probably want to take a look at dependency injection and inversion of control.
Otherwise, if you choose the data mapper method, you should inject the Database into the data mapper. This way it is still possible to use several databases, while separating all your concerns.
For more information about the active record pattern and the data mapper pattern, I would advise you to get the Patterns of Enterprise Application Architecture book of Martin Fowler. It is full of these kind of patterns and much, much more!
I hope this helps (and sorry if there are some really bad English sentences in there, I'm not a native speaker!).
== EDIT ==
Using the active record pattern or data mapper pattern also helps in testing your code (like Aurel said). If you separate all pieces of code to do just one thing, it will be easier to check that each is really doing this one thing. By using PHPUnit (or some other testing framework) to check that your code is properly working, you can be pretty sure that no bugs will be present in each of your code units. If you mix up the concerns (like when you choose option 1 of your choices), this will be a whole lot harder. Things get pretty mixed up, and you will soon get a big bunch of spaghetti code.
== EDIT2 ==
An example of the active record pattern (that is pretty lazy, and not really active):
class Controller {
    public function main() {
        $database = new Database('host', 'username', 'password');
        $database->selectDatabase('database');

        $user = new User($database);
        $user->name = 'Test';
        $user->insert();

        $otherUser = new User($database, 5);
        $otherUser->delete();
    }
}

class Database {
    protected $connection = null;

    public function __construct($host, $username, $password) {
        // Connect to database and set $this->connection
    }

    public function selectDatabase($database) {
        // Set the database on the current connection
    }

    public function execute($query) {
        // Execute the given query
    }

    public function escape($value) {
        // Escape the value for safe use in a query
    }
}

class User {
    protected $database = null;
    protected $id = 0;
    public $name = '';

    // Add database on creation and get the user with the given id
    public function __construct($database, $id = 0) {
        $this->database = $database;
        if ($id != 0) {
            $this->load($id);
        }
    }

    // Get the user with the given ID
    public function load($id) {
        $sql = 'SELECT * FROM users WHERE id = ' . $this->database->escape($id);
        $result = $this->database->execute($sql);
        $this->id = $result['id'];
        $this->name = $result['name'];
    }

    // Insert this user into the database
    public function insert() {
        $sql = 'INSERT INTO users (name) VALUES ("' . $this->database->escape($this->name) . '")';
        $this->database->execute($sql);
    }

    // Update this user
    public function update() {
        $sql = 'UPDATE users SET name = "' . $this->database->escape($this->name) . '" WHERE id = ' . $this->database->escape($this->id);
        $this->database->execute($sql);
    }

    // Delete this user
    public function delete() {
        $sql = 'DELETE FROM users WHERE id = ' . $this->database->escape($this->id);
        $this->database->execute($sql);
    }

    // Other methods of this user
    public function login() {}
    public function logout() {}
}
And an example of the data mapper pattern:
class Controller {
    public function main() {
        $database = new Database('host', 'username', 'password');
        $database->selectDatabase('database');

        $userMapper = new UserMapper($database);

        $user = $userMapper->get(0);
        $user->name = 'Test';
        $userMapper->insert($user);

        $otherUser = $userMapper->get(5);
        $userMapper->delete($otherUser);
    }
}

class Database {
    protected $connection = null;

    public function __construct($host, $username, $password) {
        // Connect to database and set $this->connection
    }

    public function selectDatabase($database) {
        // Set the database on the current connection
    }

    public function execute($query) {
        // Execute the given query
    }

    public function escape($value) {
        // Escape the value for safe use in a query
    }
}

class UserMapper {
    protected $database = null;

    // Add database on creation
    public function __construct($database) {
        $this->database = $database;
    }

    // Get the user with the given ID
    public function get($id) {
        $user = new User();
        if ($id != 0) {
            $sql = 'SELECT * FROM users WHERE id = ' . $this->database->escape($id);
            $result = $this->database->execute($sql);
            $user->id = $result['id'];
            $user->name = $result['name'];
        }
        return $user;
    }

    // Insert the given user
    public function insert($user) {
        $sql = 'INSERT INTO users (name) VALUES ("' . $this->database->escape($user->name) . '")';
        $this->database->execute($sql);
    }

    // Update the given user
    public function update($user) {
        $sql = 'UPDATE users SET name = "' . $this->database->escape($user->name) . '" WHERE id = ' . $this->database->escape($user->id);
        $this->database->execute($sql);
    }

    // Delete the given user
    public function delete($user) {
        $sql = 'DELETE FROM users WHERE id = ' . $this->database->escape($user->id);
        $this->database->execute($sql);
    }
}

class User {
    public $id = 0;
    public $name = '';

    // Other methods of this user
    public function login() {}
    public function logout() {}
}
== EDIT 3: after edit by bot ==
I think there is no need for any static property, nor does the Container need those makeUser of makeLocation methods. Lets assume that you have some entry point of your application, in which you create a class that will control all flow in your application. You seem to call it a container, I prefer to call it a controller. After all, it controls what happens in your application.
$controller = new Controller();
The controller will have to know what database it has to load, and if there is one single database or multiple ones. For example, one database contains the user data, anonther database contains the location data. If the active record User from above and a similar Location class are given, then the controller might look as follows:
class Controller {
    protected $databases = array();

    public function __construct() {
        $this->databases['first_db'] = new Database('first_host', 'first_username', 'first_password');
        $this->databases['first_db']->selectDatabase('first_database');

        $this->databases['second_db'] = new Database('second_host', 'second_username', 'second_password');
        $this->databases['second_db']->selectDatabase('second_database');
    }

    public function showUserAndLocation() {
        $user = new User($this->databases['first_db'], 3);
        $location = $user->getLocation($this->databases['second_db']);
        echo 'User ' . $user->name . ' is at location ' . $location->name;
    }

    public function showLocation() {
        $location = new Location($this->databases['second_db'], 5);
        echo 'The location ' . $location->name . ' is ' . $location->description;
    }
}
Probably it would be good to move all the echo's to a View class or something. If you have multiple controller classes, it might pay off to have a different entrypoint that creates all databases and pushes them in the controller. You could for example call this a front controller or an entry controller.
Does this answer you open questions?
Does anyone know of an already implemented money type for the .NET framework that supports i18n (currencies, formatting, etc)? I have been looking for a well implemented type and can't seem to find one.
I would use integer/long, and use a very low denomination like cents (or pence) - then there would be no decimal to work with, and all calculations can be rounded to the nearest cent.
or, take a look at Martin Fowler's book "Patterns of Enterprise Application Architecture". In that book, he talked about how to implement a money class.
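A minimal sketch of that idea - an integral amount in minor units (e.g. cents) paired with a currency code - is shown below in Java for brevity; this is an illustration of the pattern, not Fowler's exact class, and all names are invented:

```java
import java.util.Objects;

// Money: a long count of minor units plus a currency, avoiding
// binary floating point for monetary arithmetic.
final class Money {
    private final long minorUnits;
    private final String currency;

    Money(long minorUnits, String currency) {
        this.minorUnits = minorUnits;
        this.currency = currency;
    }

    Money add(Money other) {
        // Refuse to silently mix currencies.
        if (!currency.equals(other.currency)) {
            throw new IllegalArgumentException("currency mismatch");
        }
        return new Money(minorUnits + other.minorUnits, currency);
    }

    long minorUnits() { return minorUnits; }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Money)) return false;
        Money m = (Money) o;
        return minorUnits == m.minorUnits && currency.equals(m.currency);
    }

    @Override
    public int hashCode() { return Objects.hash(minorUnits, currency); }
}
```

For i18n formatting, the stored currency code can be handed to the platform's locale-aware formatting facilities; the value object itself stays a plain exact quantity.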
Can someone explain these 3 concepts and the differences between them with respect to an MVC framework along with an example. To me these appear almost equivalent, and it seems they are used interchangeably in some articles and not in others.
The terms you are confused about are: "domain objects", "domain entities" and "model objects". While usually used interchangeably, the domain entities and model object can also be instances of active record pattern (basically: domain objects with added storage logic).
In ordinary domain object there is no storage logic. It gets handled by data mappers.
The term "model objects" comes from Fowler's books (read PoEAA for more details), and, IMHO, is part of the confusions MVC, because the entire model is an application layer (MVC consists of it and presentation layer), which contains those "model objects", which are usually dealt with by services (in that image, the model layer is all three concentric circles together).
I much prefer to use "domain object" term instead.
The term "domain entity" (or "entity object") is usually used when author implies that the object is a direct representation of a storage structure (more often - a database table). These are also almost always implementations of active record.
P.S.: in some articles you would also see term "models" (plural). It usually is not directly related to MVC design pattern, because it talks about Rails-like architecture, where "models" are just active records, that get directly exposed-to/created-by controller.
I'm not sure whether this confused you more.
I have to design a Data Access Layer with .NET that probably will use more than one database management system (MySQL and SQL Server) with the same relational design.
Basically, it has to be simple to switch from one database to another so I would like you to recommend me some web-sites or books that has been useful for you, with common design patterns or information in general to implement this kind of data access layer.
Thank you.
In general, I second John Nolan's recommendation of Patterns of Enterprise Application Architecture.
More specifically, I would always recommend that you hide your Data Access Layer behind an interface and use Dependency Injection to inject a particular Data Access Component into your Domain Logic at run-time.
You can use a Dependency Injection Container or do it manually.
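A minimal sketch of that shape - domain logic depending only on an interface, with the concrete data access injected through the constructor - is shown below in Java for brevity (the C# version is structurally identical; all names are invented):

```java
// The domain logic depends only on this interface, never on a
// concrete database.
interface CustomerStore {
    String findName(int id);
}

// One implementation per backend; a real one would wrap MySQL
// or SQL Server access.
class InMemoryCustomerStore implements CustomerStore {
    public String findName(int id) {
        return id == 1 ? "Acme" : null;
    }
}

// Constructor injection: switching databases means passing a
// different CustomerStore implementation, nothing else changes.
class GreetingService {
    private final CustomerStore store;

    GreetingService(CustomerStore store) { this.store = store; }

    String greet(int id) {
        String name = store.findName(id);
        return name == null ? "Hello, stranger" : "Hello, " + name;
    }
}
```

The point of the design is that GreetingService compiles and tests against the interface alone, so the MySQL and SQL Server implementations can be swapped at composition time.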
On the technology side I would recommend Microsoft's Entity Framework, since your data access needs seem to be constrained to relational databases. The Entity Framework is Microsoft's official OR/M and it has providers for many different RDBMSs, as well as LINQ support.
I'm really struggling with a recurring OOP / database concept.
Please allow me to explain the issue with pseudo-PHP-code.
Say you have a "user" class, which loads its data from the
users table in its constructor:
class User {
    public $name;
    public $height;

    public function __construct($user_id) {
        $result = Query the database where the `users` table has `user_id` of $user_id

        $this->name = $result['name'];
        $this->height = $result['height'];
    }
}
Simple, awesome.
Now, we have a "group" class, which loads its data from the
groups table joined with the
groups_users table and creates
user objects from the returned
user_ids:
class Group {
    public $type;
    public $schedule;
    public $users;

    public function __construct($group_id) {
        $result = Query the `groups` table, joining the `groups_users` table, where `group_id` = $group_id

        $this->type = $result['type'];
        $this->schedule = $result['schedule'];

        // Make the user objects
        foreach ($result['user_ids'] as $user_id) {
            $this->users[] = new User($user_id);
        }
    }
}
A group can have any number of users.
Beautiful, elegant, amazing... on paper. In reality, however, making a new group object...
$group = new Group(21); // Get the 21st group, which happens to have 4 users
...performs 5 queries instead of 1. (1 for the group and 1 for each user.) And worse, if I make a
community class, which has many groups in it that each have many users within them, an ungodly number of queries are run!
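The blow-up can be made concrete with a query counter (a toy sketch in Java; all names are invented). Loading one group of N users costs 1 + N queries when each user object loads itself:

```java
// Count queries issued by the naive "load group, then load each
// user" pattern, illustrating the 1 + N cost described above.
class QueryCounter {
    int queries = 0;

    int[] loadGroupUserIds(int groupId, int userCount) {
        queries++;                       // one query for the group row
        int[] ids = new int[userCount];
        for (int i = 0; i < userCount; i++) ids[i] = i + 1;
        return ids;
    }

    void loadUser(int userId) {
        queries++;                       // one query per user
    }

    int loadGroup(int groupId, int userCount) {
        for (int id : loadGroupUserIds(groupId, userCount)) {
            loadUser(id);
        }
        return queries;
    }
}
```

For the group of 4 users above this yields 5 queries, and the count multiplies again at each extra level of nesting (community, city, ...).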
The Solution, Which Doesn't Sit Right To Me
For years, the way I've got around this is to not code in the above fashion. Instead, when making a `group`, for instance, I would join the `groups` table to the `groups_users` table to the `users` table as well, and create an array of user-object-like arrays within the `group` object (never using/touching the `user` class):

```php
class Group {
    public $type;
    public $schedule;
    public $users;

    public function __construct($group_id) {
        $result = Query the `groups` table, joining the `groups_users` table, and also joining the `users` table, where `group_id` = $group_id;
        $this->type = $result['type'];
        $this->schedule = $result['schedule'];
        foreach ($result['users'] as $user) {
            // Make user arrays
            $this->users[] = array_of_user_data_crafted_from_the_query_result;
        }
    }
}
```
...but then, of course, if I make a "community" class, in its constructor I'll need to join the `communities` table with the `communities_groups` table with the `groups` table with the `groups_users` table with the `users` table.
...and if I make a "city" class, in its constructor I'll need to join the
cities table with the
cities_communities table with the
communities table with the
communities_groups table with the
groups table with the
groups_users table with the
users table.
What an unmitigated disaster!
Do I have to choose between beautiful OOP code with a million queries VS. 1 query and writing these joins by hand for every single superset? Is there no system that automates this?
I'm using CodeIgniter, and looking into countless other MVC's, and projects that were built in them, and cannot find a single good example of anyone using models without resorting to one of the two flawed methods I've outlined.
It appears this has never been done before.
One of my coworkers is writing a framework that does exactly this - you create a class that includes a model of your data. Other, higher models can include that single model, and it crafts and automates the table joins to create the higher model that includes object instantiations of the lower model, all in a single query. He claims he's never seen a framework or system for doing this before, either.
Please Note: I do indeed always use separate classes for logic and persistence. (VOs and DAOs - this is the entire point of MVCs). I have merely combined the two in this thought-experiment, outside of an MVC-like architecture, for simplicity's sake. Rest assured that this issue persists regardless of the separation of logic and persistence. I believe this article, introduced to me by James in the comments below this question, seems to indicate that my proposed solution (which I've been following for years) is, in fact, what developers currently do to solve this issue. This question is, however, attempting to find ways of automating that exact solution, so it doesn't always need to be coded by hand for every superset. From what I can see, this has never been done in PHP before, and my coworker's framework will be the first to do so, unless someone can point me towards one that does.
And, also, of course I never load data in constructors, and I only call the load() methods that I create when I actually need the data. However, that is unrelated to this issue, as in this thought experiment (and in the real-life situations where I need to automate this), I always need to eager-load the data of all subsets of children as far down the line as it goes, and not lazy-load them at some future point in time as needed. The thought experiment is concise -- that it doesn't follow best practices is a moot point, and answers that attempt to address its layout are likewise missing the point.
EDIT: Here is a database schema, for clarity.
```sql
CREATE TABLE `groups` (
  `group_id` int(11) NOT NULL,  -- auto increment
  `type`     varchar(20) NOT NULL,
  `schedule` varchar(20) NOT NULL
);

CREATE TABLE `groups_users` (   -- relational table (many users to one group)
  `group_id` int(11) NOT NULL,
  `user_id`  int(11) NOT NULL
);

CREATE TABLE `users` (
  `user_id` int(11) NOT NULL,   -- auto increment
  `name`    varchar(20) NOT NULL,
  `height`  int(11) NOT NULL
);
```
(Also note that I originally used the concepts of wheels and cars, but that was foolish, and this example is much clearer.)
SOLUTION:
I ended up finding a PHP ORM that does exactly this. It is Laravel's Eloquent. You can specify the relationships between your models, and it intelligently builds optimized queries for eager loading using syntax like this:
```php
Group::with('users')->get();
```

It is an absolute life saver. I haven't had to write a single query. It also doesn't use joins; it intelligently compiles SELECTs based on foreign keys.
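For anyone curious, what that kind of eager loading boils down to under the hood is roughly the following (a simplified sketch of the idea, not Eloquent's actual internals; `$db->query()` is a stand-in for whatever database layer you use):

```php
// Query 1: the group itself.
$group = $db->query('SELECT * FROM `groups` WHERE `group_id` = 21');

// Query 2: the foreign keys of its users.
$ids = $db->query('SELECT `user_id` FROM `groups_users` WHERE `group_id` = 21');

// Query 3: all of those users in one go, using the collected keys.
$users = $db->query('SELECT * FROM `users` WHERE `user_id` IN (1, 2, 3, 4)');
```

A fixed three queries regardless of how many users the group has, instead of the 1 + N queries of the constructor-per-user approach.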
Say you have a "wheel" class, which loads its data from the wheels table in its constructor
Constructors should not be doing any work. Instead they should contain only assignments. Otherwise you make it very hard to test the behavior of the instance.
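A minimal sketch of what that looks like in practice (illustrative names; the query is moved out of the constructor into an explicit method that can be called, and tested, separately):

```php
class User
{
    private $pdo;
    public $name;
    public $height;

    // The constructor only assigns dependencies; no queries here.
    public function __construct(PDO $pdo)
    {
        $this->pdo = $pdo;
    }

    // The actual work happens in a method you invoke explicitly.
    public function load($user_id)
    {
        $stmt = $this->pdo->prepare(
            'SELECT `name`, `height` FROM `users` WHERE `user_id` = ?'
        );
        $stmt->execute([$user_id]);
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        $this->name   = $row['name'];
        $this->height = $row['height'];
    }
}
```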
Now, we have a "car" class, which loads its data from the cars table joined with the cars_wheels table and creates wheel objects from the returned wheel_ids:
No. There are two problems with this.
Your `Car` class should not contain both code for implementing "car logic" and "persistence logic". Otherwise you are breaking SRP. And wheels are a dependency for the class, which means that the wheels should be injected as a parameter for the constructor (most likely as a collection of wheels, or maybe an array).

Instead you should have a mapper class, which can retrieve data from the database and store it in the `WheelCollection` instance. And a mapper for the car, which will store data in the `Car` instance.
```php
$car = new Car;
$car->setId( 42 );

$mapper = new CarMapper( $pdo );

if ( $mapper->fetch($car) ) // if there was a car in DB
{
    $wheels = new WheelCollection;
    $otherMapper = new WheelMapper( $pdo );

    $car->addWheels( $wheels );
    $wheels->setType( $car->getWheelType() );
    // I am not a mechanic. There is probably some name for describing
    // wheels that a car can use
    $otherMapper->fetch( $wheels );
}
```
Something like this. The mappers in this case are responsible for performing the queries. And you can have several sources for them; for example, have one mapper that checks the cache and, only if that fails, pulls data from SQL.
Do I really have to choose between beautiful OOP code with a million queries VS. 1 query and disgusting, un-OOP code?
No, the ugliness comes from the fact that the Active Record pattern is only meant for the simplest of use cases (where there is almost no logic associated; glorified value objects with persistence). For any non-trivial situation it is preferable to apply the Data Mapper pattern.
..and if I make a "city" class, in its constructor I'll need to join the cities table with the cities_dealerships table with the dealerships table with the dealerships_cars table with the cars table with the cars_wheels table with the wheels table.
Just because you need data about "available cars per dealership in Moscow" does not mean that you need to create `Car` instances, and you definitely will not care about wheels there. Different parts of the site will have different scales at which they operate.
The other thing is that you should stop thinking of classes as table abstractions. There is no rule that says "you must have 1:1 relation between classes and tables".
Take the `Car` example again. If you look at it, having a separate `Wheel` (or even `WheelSet`) class is just stupid. Instead you should just have a `Car` class which already contains all its parts.
```php
$car = new Car;
$car->setId( 616 );

$mapper = new CarMapper( $cache );
$mapper->fetch( $car );
```
The mapper can easily fetch data not only from the "Cars" table but also from the "Wheels", "Engines", and other tables, and populate the `$car` object.
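A sketch of what such a mapper's fetch() could look like (the table and column names, and the setters on `Car`, are hypothetical):

```php
class CarMapper
{
    private $pdo;

    public function __construct(PDO $pdo)
    {
        $this->pdo = $pdo;
    }

    public function fetch(Car $car)
    {
        // One query pulls data from several tables into one domain object.
        $stmt = $this->pdo->prepare(
            'SELECT c.model, w.type AS wheel_type, e.displacement
               FROM cars    c
               JOIN wheels  w ON w.car_id = c.car_id
               JOIN engines e ON e.car_id = c.car_id
              WHERE c.car_id = ?'
        );
        $stmt->execute([$car->getId()]);

        if ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            $car->setModel($row['model']);
            $car->setWheelType($row['wheel_type']);
            $car->setEngineDisplacement($row['displacement']);
            return true;
        }
        return false;
    }
}
```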
P.S.: also, if you care about code quality, you should start reading the PoEAA book. Or at least start watching the lectures listed here.
my 2 cents.
I've seen various MVC frameworks as well as standalone ORM frameworks for PHP, as well as other ORM questions here; however, most of the questions ask for existing frameworks to get started with, which is not what I'm looking for. (I have also read this SO question, but I'm not sure what to make of it as the answers are vague.)
Instead, I figured I'd learn best by getting my hands dirty and actually writing my own ORM, even a simple one. Except I don't really know how to get started, especially since the code I see in other ORMs is so complicated.
With my PHP 5.2.x (this is important) MVC framework I have a basic custom database abstraction layer that has `connect($host, $user, $pass, $base)`, `query($sql, $binds)`, etc.
But does not have:
EDIT: to clarify, I only have a database abstraction layer. I don't have models yet, but when I implement them I want them to be native ORM models (so to speak), hence this question.
I've read up a little about ORM, and from my understanding they provide a means to further abstract data models from the database itself by representing data as nothing more than PHP-based classes/objects; again, correct me if I am wrong or have missed out in any way.
Still, I'd like some simple tips from anyone else who's dabbled more or less with ORM frameworks. Is there anything else I need to take note of, simple, academic samples for me to refer to, or resources I can read?
As this question is rather old, I guess you already have had your try at writing an ORM yourself. Nonetheless, as I wrote a custom ORM two years ago, I would still like to share my experience and ideas.
As said, I implemented a custom ORM two years ago and even used it with some success in small to medium sized projects. I integrated it in a rather popular CMS which at that time (and even now) lacks such ORM functionality. Furthermore, back then, popular frameworks like Doctrine didn't really convince me. Much has changed since then and Doctrine 2 evolved into a solid framework, so, if I now had the choice between implementing my own ORM or using one of the popular frameworks like Doctrine 2 for production use, this would be no question at all: use the existing, stable solutions. BUT: implementing such a framework (in a simple manner) was a very valuable learning exercise and it helped me a lot in working with larger open source ORMs, as you get a better understanding of the pitfalls and difficulties associated with object relational mapping.
It is not too difficult to implement basic ORM functionality, but as soon as the mapping of relationships between objects comes into play, it gets much, much more difficult/interesting.
How did I get started?
What got me hooked was Martin Fowler's book Patterns of Enterprise Application Architecture. If you want to program your own ORM, or even if you are just working with some ORM framework, buy this book. It is one of the most valuable resources that cover many of the basic and advanced techniques regarding the field of object relational mapping. Read up on it; you get many great ideas on the patterns behind an ORM.
Basic Architecture
First, I decided whether I would rather use an Active Record approach or some kind of Data Mapper. This decision influences how the data from the database is mapped to the entity. I decided to implement a simple Data Mapper, the same approach that Doctrine 2 or Hibernate in Java uses. Active Record is the approach of the ORM functionality (if you can call it so) in Zend Framework. Active Record is much simpler than a Data Mapper, but also much more limited. Read up on these patterns and check the mentioned frameworks; you get the difference pretty fast. If you decide to go with a Data Mapper, you should also read up on PHP's reflection API.
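The difference between the two patterns is easiest to see side by side (a rough sketch; the class names are made up):

```php
// Active Record: the entity knows how to persist itself.
$user = new User();
$user->name = 'Alice';
$user->save();                  // SQL lives inside the User class

// Data Mapper: the entity is plain; a separate mapper handles persistence.
$user = new User();
$user->setName('Alice');
$mapper = new UserMapper($pdo);
$mapper->store($user);          // SQL lives inside the mapper
```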
Querying
I had the ambitious goal to create my own query language, much like DQL in Doctrine or HQL in Hibernate. I soon abandoned that, as writing a custom SQL parser/lexer seemed way too complicated (and it really is!). What I did was to implement a Query Object, in order to encapsulate the information about which tables are involved in the query (that's important, as you need to map the data from the database to the relevant classes for each table).
Querying for an object in my ORM looked like this:
```php
public function findCountryByUid($countryUid)
{
    $queryObject = new QueryObject();
    $queryObject->addSelectFields(new SelectFields('countries', '*'))
                ->addTable(new Table('countries'))
                ->addWhere('countries.uid = "' . intval($countryUid) . '"');
    $res = $this->findByQuery($queryObject);
    return $res->getSingleResult();
}
```
Configuration
Normally, you also need to have some kind of configuration format. Hibernate uses XML (among others), Doctrine 2 uses PHP annotations, and EZComponents uses PHP arrays in its Persistent Object component as a config format. That's what I used, too; it seemed like a natural choice, and the CMS I worked with used the PHP configuration format as well.
With that configuration, you define
And that's the information you use in your Data Mapper to map the DB result to objects.
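Such a PHP-array configuration might look roughly like this (an illustrative sketch of the idea, not the actual format I used; all keys and names are invented):

```php
// Maps the Country class onto the `countries` table.
$config = [
    'Country' => [
        'table'     => 'countries',
        'idField'   => 'uid',
        'columns'   => [
            // property => column
            'uid'  => 'uid',
            'name' => 'name',
            'iso'  => 'iso_code',
        ],
        'relations' => [
            // one Country has many City objects
            'cities' => ['class' => 'City', 'foreignKey' => 'country_uid'],
        ],
    ],
];
```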
Implementation
I decided to go with a strong test driven approach, because of the complex nature of writing a custom ORM. TDD or not, writing many, many unit tests is a really good idea on such a project. Apart from that: get your hands dirty and keep Fowler's book close. ;-)
As I said, it was really worth the effort, but I wouldn't want to do it again, mainly because of the mature frameworks that exist nowadays.
I don't use my ORM anymore. It worked, but lacked many features, among others: lazy loading, component mapping, transaction support, caching, custom types, prepared statements/parameters, etc. And its performance wasn't good enough for using it in large scale projects.
Nonetheless, I hope I could give you some starting points in the field of ORM, if you didn't know them already. ;-)
Try Applying Domain-Driven Design and Patterns by Jimmy Nilsson. It covers DDD and its patterns in .NET.
I'm currently interested in how to architecture good .NET applications and I'm reading or have currently read some of the following books:
Those two Microsoft books really explain how to design .NET applications with high testability using Inversion Of Control and such.
And to be clear, yes, they all use design patterns common in TDD, DDD, Dependency Injection, and so on...
For your needs I would recommend starting with:
Like the title says, it's basically a book on how to do DDD and TDD in a .NET environment.
Here are a few that I would recommend:
I'm working on a project at the moment that has a rather unusual requirement and I'm hoping to get some advice on the best way to handle it or even some pointers to info that can help me build a solution.
Ok, so this is what I need to do. The application stores and manages various types of media files but each deployment of the application has completely different metadata requirements for the media files.
This metadata can contain an arbitrary number of fields of different types (single line text, multi-line text, checkboxes, selected values, etc.) and also often requires validation particularly presence and uniqueness validations.
The application needs to be able to easily retrieve values and most importantly has to be able to handle full searching capabilities on these fields.
One option I considered was using a property list arrangement where the database table simply contained a property name and value for each metadata field of each media file. However, when prototyping this solution it quickly became apparent that it simply wasn't going to be efficient enough for the searching and retrieval of records particularly when the database can be reasonably large e.g. a recent deployment had 3000 media files and there were over 20 metadata fields. Also, the queries to do a search and retrieve the relevant records quickly became very complex.
Another option that the system is currently using is that the metadata config is defined upfront and a migration is run during deployment to create the table and model with a standard name so that the media model can be associated with it, which the system then uses. This generally works pretty fine but it does cause some significant deployment and testing issues.
For example, writing unit tests becomes much more challenging when you don't know the config until deployment. Although I could write a sample config and test the code that way, it won't allow me to test the specific requirements of a particular deployment.
Similarly, in development, it currently requires me to copy a migration from the config into the main folder, run it, do all of my testing and development and then I have to remember to rollback and remove that migration from the main folder so that the application is in a standard state. This particularly becomes challenging when I'm bug fixing and I need to have the application in a specific configuration for testing and debugging purposes. Trying to switch between the various configurations becomes a real nightmare.
Ideally, what I would like is to be able to dynamically create the table and model including validations, etc. from a config file when the server is started. Even better would be if I could maintain multiple metadata setups in the one database with each one having its own table so that all I need to do to switch between them is change which config file the application is currently using.
I'm sure this can be done with Rails but there is very little information that I've been able to find that can point me in the right direction of how to build it during my research over the past few days so any help or suggestions would be much appreciated!
If I understand you correctly, Rails has some nifty tricks to help you solve these problems.
In the ActiveRecord ORM it's possible to model what you're trying to do in a relational database, either using the single table inheritance pattern, or with polymorphic associations (...a bit more complex but more flexible too). A polymorphic association allows a model to belong_to different types of other models. There's a recent railscast on this topic but I won't link to it since it requires a paid subscription.
On the deployment side, it sounds like you're doing a lot of things manually, which is the right way to start until a pattern emerges. Once you start seeing the pattern, there are excellent programs available for configuration, build, and deployment automation such as Capistrano, OpsCode Chef, and Puppet, to name just a few. You might also benefit from integrating your configuration and deployment with your source code repository to achieve a better workflow. For example, with Git you could define topic branches for the various media file type and have a different configuration in each branch that matches the topic branch.
You may want to check out Martin Fowler's excellent book 'PoEAA' and some of the topics on his website. I hope this answer helps even though the answer is pretty generic. Your question is very broad and does not have one simple answer.

I'm building a tiny MVC framework for learning/experimenting and small project purposes. I needed to find out the basics of the internals of the Model, since a full MVC framework and ORM is overkill for just a few database calls.
```php
class Model { }
```
Using an empty class, where would I have to call a new PDO object for database calls?
What would calling a query look like inside the Model?
Additionally, where can I find a beginner's web/book resource to MVC (with lots of example code)? I've heard a lot of terms such as business logic and database logic. I remember reading somewhere that you should separate business logic and database logic. I can understand the concept somewhat, I just wonder what it looks like or what they mean in code itself. I'm confused how business logic and database logic should be separated but still be inside the Model.
I'm mostly looking for code/logic examples as answers, except maybe the latter paragraph.
Warning:
The information in this post is extremely outdated. It represents my understanding of the MVC pattern as it was more than 2 years ago. It will be updated when I get round to it. Probably this month (2013.09).
Model itself should not contain any SQL. Ever. It is meant to only contain domain business logic.
The approach I would recommend is to separate the responsibilities which are not strictly "business logic" into two other sets of constructs: Domain Objects and Data Mappers.
For example, if you are making a blog, then the Model will not be Post. Instead, most likely the model will be Blog, and this model will deal with multiple Domain Objects: multiple instances of Post, Comment, User, and maybe other objects.

In your model, the domain objects should not know how to store themselves in the database, or even be aware of the existence of any form of storage. That is a responsibility of Data Mappers. All you should do in the Model is to call `$mapper->store( $comment );`, and the data mapper should know how to store one specific type of domain object, and in which table to put the information (usually the storage of a single domain object actually affects multiple tables).
Here is an example (only relevant fragments from the files; the `_` prefix in the examples means `protected`).

From /application/bootstrap.php:
```php
/* --- snip --- */
$connection = new PDO( 'sqlite::memory:' );
$model_factory = new ModelFactory( $connection );
$controller = new SomeController( $request , $model_factory );
/* --- snip --- */
$controller->{$action}();
/* --- snip --- */
```
From /framework/classes/ModelFactory.php:

```php
/* --- snip --- */
class ModelFactory implements ModelBuilderInterface
{
    /* --- snip --- */
    protected function _prepare()
    {
        if ( $this->_object_factory === null ) {
            $this->_object_factory = new DomainObjectFactory;
        }
        if ( $this->_mapper_factory === null ) {
            $this->_mapper_factory = new DataMapperFactory( $this->_connection );
        }
    }

    public function build( $name )
    {
        $this->_prepare();
        return new {$name}( $this->_object_mapper , $this->_data_mapper );
    }
    /* --- snip --- */
}
```
File /application/controllers/SomeController.php:

```php
/* --- snip --- */
public function get_foobar()
{
    $factory = $this->_model_factory;
    $view = $this->_view;

    $foo = $factory->build( 'FooModel' );
    $bar = $factory->build( 'BarModel' );

    $bar->set_language( $this->_request->get('lang') );
    $view->bind( 'ergo' , $foo );
    /* --- snip --- */
}
/* --- snip --- */
```
File /application/models/FooModel.php:

```php
/* --- snip --- */
public function find_something( $param , $filter )
{
    $something = $this->_object_factory('FooBar');
    $mapper = $this->_mapper_factory('FooMapper');

    $something->set_type( $param );
    $mapper->use_filter( $filter )->fetch( $something );

    return $something;
}
/* --- snip --- */
```
I hope this will help you understand the separation between DB logic and business logic (and, actually, presentation logic too).
A Model should never extend Database or ORM, because a Model is not a subset of them. By extending a class, you are declaring that it has all the characteristics of the superclass, but with minor exceptions.
```php
class Duck extends Bird {}
class ForestDuck extends Duck {}  // this is ok

class Table extends Database {}
class Person extends Table {}     // this is kinda stupid and a bit insulting
```
Besides the obvious logic issues, if your Model is tightly coupled with the underlying Database, it makes the code extremely hard to test (talking about Unit Testing (video)).
I personally think that ORMs are useless and, in large projects, even harmful. The problem stems from the fact that ORMs are trying to bridge two completely different ways of approaching problems: OOP and SQL.
If you start a project with an ORM then, after a short learning curve, you are able to write simple queries very fast. But by the time you start hitting the ORM's limitations and problems, you are already completely invested in the use of the ORM (maybe even new people were hired who were really good at your chosen ORM, but sucked at plain SQL). You end up in a situation where every new DB related issue takes more and more time to solve. And if you have been using an ORM based on the ActiveRecord pattern, then the problems directly influence your Models.
Uncle Bob calls this "technical debt".
loosely related to subject
Recently I came about this concept of Design Patterns, and felt really enthusiastic about it. Can you guys suggest some resources that help me dive into Design Patterns?
You know, for me, one of the best books out there is Head First Design Patterns. I personally like the style that they use to communicate the material.
The gang of four design patterns book is the standard. I recommend it if you're comfortable with C++.
Head first design patterns is good too, especially if you like visual aids and want to feel like you're learning design patterns in a '50s diner. Examples are in Java.
There are sometimes multiple ways to implement patterns in a given programming language (for example see this discussion of options for implementing the Singleton pattern in C#), so it might be worth getting one book to succinctly describe the common patterns, and another to suggest the best way to implement them in your favorite language.
Martin Fowler's website has plenty of information. Much of this is covered also in his book, Patterns of Enterprise Application Architecture.
I like these 2...
this one really helps with taking existing code and implementing a design pattern.
I find Design Patterns Explained to be a good introductory text. The Gang of Four book is a reference for those who already understand patterns.
Wikipedia, the Gang of Four book, and if you're specifically interested in C# implementations there's a decent site here.

I have been using Rails for over 4 years, so obviously I like Rails and like doing things the Rails Way, and sometimes I unknowingly fall to the dark side.
I recently picked up Clean Code by Uncle Bob. I am on Chapter 6 and a bit confused whether we as rails developers break the very fundamental rule of OO design, i.e. Law of Demeter or encapsulation? The Law of Demeter states that an object should not know the innards of another object and it should not invoke methods on objects that are returned by a method because when you do that then it suggests one object knows too much about the other object.
But very often we call methods on another object from a model. For example, when we have a relationship like "an order belongs to a user", we very often end up doing `order.user.name`, or, to prevent it from looking like a train wreck, we set up a delegate so we can do `order.name`.
Isn't that still like breaking the Law of Demeter or encapsulation ?
The other question is: is ActiveRecord just a Data Structure or Data Transfer Object that interfaces with the database?
If yes, then don't we create a Hybrid Structure, i.e. half object and half data structure by putting our business rules in ActiveRecord Models?
Yes, ActiveRecord deliberately breaks encapsulation. This is not so much a limitation of Rails as it is a limitation of the pattern it's based on. Martin Fowler, whose definition of ActiveRecord was pretty much the template Rails used, says as much in the ActiveRecord chapter of POEAA:
Another argument against Active Record is the fact that it couples the object design to the database design. This makes it more difficult to refactor either design as a project goes forward.
This is a common criticism of Rails from other frameworks. Fowler himself says ActiveRecord is mainly to be used
...for domain logic that isn't too complex...if your business logic is complex, you'll soon want to use your object's direct relationships, collections, inheritance and so forth. These don't map easily onto Active Record.
Fowler goes on to say that for more serious applications with complex domain logic the Data Mapper pattern, which does a better job of separating the layers, is preferable. This is one of the reasons that Rails upcoming move to Merb has been generally seen as a positive move for Rails, as Merb makes use of the DataMapper pattern in addition to ActiveRecord.
I'm not sure Demeter is the primary concern with ActiveRecord. Rather I think breaking encapsulation between the data and domain layers breaks Uncle Bob's Single Responsibility Principle. Demeter I think is more a specific example of how to follow the Open/Closed Principle. But I think the broader idea behind all these is the same: classes should do one thing and be robust against future changes, which to some degree ActiveRecord is not.
I was talking with a programmer pal of mine about inheritance and its use in designing models. He's a big proponent and I'm a little more tepid. Mostly because I tend to design systems from the bottom up: Database -> Application -> Presentation (honestly, I'm not much of a front-end guy, so I often leave presentation entirely to someone else). I find that relational database systems don't support inheritance without a lot of 1-to-1 relationships to other tables.
If I'm designing from a conceptual standpoint, an Administrator is a User is a Person. Starting from the database up, an Administrator is a User with UserType = "Administrator". Reconciling these approaches seems difficult to me, and therefore I only use inheritance with non-persisted objects.
What's wrong with my thinking? Why is inheritance such an oft-celebrated aspect of OO if it has these inherent incompatibilities with traditional relational structures? Or, is it me who's not mapping inheritance properly to relational data? Is there some guidance out there that might clear this up for me?
Sorry for the rambling question and thanks in advance for your answers. If you include code in your response, C# is my "native language".
You should read Patterns of Enterprise Application Architecture. It will show you how inheritance patterns can and should be expressed in relational structures.
The important thing to remember is that inheritance is a conceptual modelling thing. Relational structures and OO code are just materialisations of those conceptual models, and shouldn't be considered the be-all and end-all of object orientation.
There is a tutorial on this type of thing here. :)
Take this simple, contrived example:
```csharp
UserRepository.GetAllUsers();
UserRepository.GetUserById();
```
Inevitably, I will have more complex "queries", such as:
```csharp
// returns users where active=true, deleted=false, and confirmed=true
GetActiveUsers();
```
I'm having trouble determining where the responsibility of the repository ends. GetActiveUsers() represents a simple "query". Does it belong in the repository?
How about something that involves a bit of logic, such as:
```csharp
// activate the user, set the activationCode to "used", etc.
ActivateUser(string activationCode);
```
These are all excellent questions to be asking. Being able to determine which of these you should use comes down to your experience and the problem you are working on.
I would suggest reading a book such as Fowler's Patterns of Enterprise Application Architecture. In this book he discusses the patterns you mention. Most importantly, though, he assigns each pattern a responsibility. For instance, domain logic can be put in either the Service or Domain layers. There are pros and cons associated with each.
If I decide to use a Service layer I assign the layer the role of handling Transactions and Authorization. I like to keep it 'thin' and have no domain logic in there. It becomes an API for my application. I keep all business logic with the domain objects. This includes algorithms and validation for the object. The repository retrieves and persists the domain objects. This may be a one to one mapping between database columns and domain properties for simple systems.
I think GetActiveUsers is OK for the Repository. You wouldn't want to retrieve all users from the database and figure out which ones are active in the application, as this would lead to poor performance. If ActivateUser has business logic as you suggest, then that logic belongs in the domain object. Persisting the change is the responsibility of the Repository layer.
Hope this helps.
What is the best practise in mapping your database entities to your models and performing business logic? I've seen considerably different implementations of both. I have noticed a number of implementations where the Repository(in the Data Layer) itself is responsible for mapping the database entities to the domain models. For example, a repository that would do this:
public IQueryable<Person> GetPersons()
{
    return DbSet.Select(s => new Person
    {
        Id = s.Id,
        FirstName = s.FirstName,
        Surname = s.Surname,
        Location = s.Location,
    });
}
But having searched comprehensively around SO on N Tier design, I've noticed that while there is no silver bullet, in most situations it's advisable to perform the mapping inside the controller in the MVC project either manually or using a Mapper. It's also been reiterated that the Service layer should never perform the mapping and that it's responsibility should be to perform business logic. A couple of questions here:
If I need to perform some business logic on the Person entities, or increase the age of all Persons by 10 years, where should this operation be performed? On the model itself? For example, would I have a FullName property on the model which would compute the full name and the age? Or do I define some service inside my service layer to perform business logic?
EDIT
Wow so many close votes. Apologies, I didn't search comprehensively enough. The 'where to perform business logic' issue I've raised here can already be found on SO and elsewhere (although conveyed somewhat cryptically at times):
Validating with a Service Layer by Stephen Walther
Another great, but more generic answer here on SO
Where Should I put My Controller Business Logic in MVC
Does a Service Map Entities to a View Model
However, I'm yet to find a standard solution to the mapping question I had, and I think I could perhaps have expressed my question more eloquently. The general consensus seems to be that business logic goes in the service layer, and mapping domain models to view models should take place in the controller/presentation layer. And since it's advisable not to surface your DB entities to any layers other than the data layer, it's recommended to map your entities to domain models at the data layer, either manually or via a mapper such as AutoMapper (this is what I have gathered from reading many articles). My confusion arose from the question of where mapping entities to domain models and mapping domain models to view models should take place. The reason for my confusion was that I had read that mapping entities to domain models should happen in the controller; this should rather be rephrased to say: "Mapping entities to domain models should happen at the data layer, and mapping domain models to view models should take place in the controller."
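To make that consensus concrete, here is a rough Java sketch of the two mapping steps (all type names are invented for the example): entity to domain model at the data layer, domain model to view model at the presentation layer.

```java
// Illustrative types only; not taken from any specific framework.
class PersonEntity { int id; String firstName; String surname; }  // DB entity
class Person { int id; String fullName; }                         // domain model
class PersonViewModel { String displayName; }                     // view model

class PersonMapping {
    // Data layer: entity -> domain model
    static Person toDomain(PersonEntity e) {
        Person p = new Person();
        p.id = e.id;
        p.fullName = e.firstName + " " + e.surname;
        return p;
    }

    // Controller / presentation layer: domain model -> view model
    static PersonViewModel toViewModel(Person p) {
        PersonViewModel vm = new PersonViewModel();
        vm.displayName = p.fullName;
        return vm;
    }
}
```

The entity type never crosses above the data layer; only the domain model and view model travel upward.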
It depends... Like you said there is no a silver bullet. One can only list pros and cons of each approach but still it is you who knows your requirements better than anyone else here. If you have time I suggest reading this book Patterns of Enterprise Application Architecture. This will give you a good understanding about different business logic and data source architectural patterns, when and how to use them. What should go to business layer and what should go to DAL. Also the book addresses the issue of how to map from DAL to Domain entities. You may even change you mind and choose absolutely different approach in the long run.
Also consider using some ORM which provides you with a Code First mechanism like EF Code First or NHibernate. In this case all mapping logic will be transparent to you at all.
What is your advice on:
I am currently reading Fowler. He mentions the Money type and its typical structure (int, long, BigDecimal), but says nothing on rounding strategies.
Older posts on money-rounding (here, and here) do not provide the detail and formality I need.
Thoughts I found on the internet relate to "round half even" as the best way to balance error.
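For what it's worth, Java's BigDecimal supports round-half-even (banker's rounding) directly via RoundingMode.HALF_EVEN; a minimal illustration:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// "Round half even": ties go to the nearest even digit, which avoids
// systematic upward bias when summing many rounded amounts.
class Money {
    static BigDecimal roundToCents(String amount) {
        return new BigDecimal(amount).setScale(2, RoundingMode.HALF_EVEN);
    }
}
```

For example, 2.125 rounds down to 2.12 (the 2 is even) while 2.135 rounds up to 2.14 (the 3 is odd).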
Thanks for the help.
Wikipedia says:
MVC provides front and back ends for the database, the user, and the data processing components. The separation of software systems into front and back ends simplifies development and separates maintenance.
I still don't see the link between the model-view-controller principle and the idea of front- and backend. Can the Model with its access to the database be seen as the Backend and the View as the frontend?
OK.. first the terms:
If you read GUI Architectures and research the MVC pattern in general, you will understand that MVC is not about separation of backend and frontend. Especially when it comes to MVC-inspired patterns, that we use for web applications.
The goal of MVC and related patterns is to separate presentation from domain business logic.
Here are the basic responsibilities of MVC parts:
Let's take an example:
This all can be done with client-side JavaScript. You can have MVC triad running "frontend"! At the same time, the "backend" which provides REST API is an MVC-like structure. Only this time the View is generating JSON responses, instead of HTML.
**Conclusion: You can use the MVC pattern both on the backend and the frontend.**
Since you have been building some applications with Rails, your understanding of MVC might be a bit distorted. The reason I say this is that RoR was initially made as a prototyping framework (notice all the scaffolding and other features for generating throw-away code), and because of that origin, Rails actually implements a very anemic version of MVP.
I call it "anemic", because they nerfed both View (it should be a passive object in MVP, not a simple template) and Model Layer (yes, it is supposed to be a complicated layer, not a collection of ORM instances).
I would recommend for you to read two publications to get a much better grasp on the subject:
The second one is as close as you can get to the initial definition of the pattern. That, together with the "GUI Architectures" article, should provide you with a solid footing on the subject. And the PoEAA book (a hard read, btw) would give you context in which to expand it.
Having watched this video by Greg Young on DDD,
I was wondering how you could implement Command-Query Separation (CQS) with DDD when you have in-memory changes?
With CQS you have two repositories, one for commands, one for queries. As well as two object groups, command objects and query objects. Command objects only have methods, and no properties that could expose the shape of the objects, and aren't to be used to display data on the screen. Query objects on the other hand are used to display data to the screen.
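As a rough sketch of that separation (illustrative names, with an in-memory map standing in for the database):

```java
import java.util.HashMap;
import java.util.Map;

// Command side: methods only, no data-shaped results exposed.
class UserCommandRepository {
    static final Map<Integer, String> STORE = new HashMap<>(); // stand-in for the DB

    void rename(int id, String newName) {
        STORE.put(id, newName); // mutates state, returns nothing
    }
}

// Query side: read-only access, shaped for display.
class UserQueryRepository {
    String displayName(int id) {
        return UserCommandRepository.STORE.get(id);
    }
}
```

The screen would always be refreshed from the query side after a command completes.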
In the video the commands always go to the database, and so you can use the query repository to fetch the updated data and redisplay on the screen.
Could you use CQS with something like and edit screen in ASP.NET, where changes are made in memory and the screen needs to be updated several times with the changes before the changes are persisted to the database?
For example
A couple of possible solutions I can think of is to have a session repository, or a way of getting a query object from the command object. Or does CQS not apply to this type of scenario?
It seems to me that in the video changes get persisted straight away to the database, and I haven't found an example of DDD with CQS that addresses the issue of batching changes to a domain object and updating the view of the modified domain object before finally issuing a command to save the domain object.
The Unit of Work design pattern from Patterns of Enterprise Application Architecture matches CQS very well - it is basically a big Command that persists stuff in the database.
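A minimal sketch of a Unit of Work, assuming the three usual change lists from Fowler's description (this is a simplification, with the actual SQL stubbed out):

```java
import java.util.ArrayList;
import java.util.List;

// Tracks in-memory changes and flushes them all at once on commit.
class UnitOfWork {
    private final List<Object> newObjects = new ArrayList<>();
    private final List<Object> dirtyObjects = new ArrayList<>();
    private final List<Object> removedObjects = new ArrayList<>();

    void registerNew(Object o)     { newObjects.add(o); }
    void registerDirty(Object o)   { if (!dirtyObjects.contains(o)) dirtyObjects.add(o); }
    void registerRemoved(Object o) { removedObjects.add(o); }

    int pending() { return newObjects.size() + dirtyObjects.size() + removedObjects.size(); }

    // The one big "command": flush all in-memory changes in a single transaction.
    void commit() {
        // INSERT newObjects, UPDATE dirtyObjects, DELETE removedObjects here
        newObjects.clear();
        dirtyObjects.clear();
        removedObjects.clear();
    }
}
```

The edit screen can keep registering changes across multiple updates, and only commit() touches the database.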
I am just posting this question so some of you might be able to point me in the right direction. I am slowly warming up to OOP, starting to understand the concept. I want to make a good solid core or foundation to be used as a CMS backend. It will also use MVC. I have been using as the MVC base.
The thing I cant figure out is the following:
Say, on the project page in the backend I have 2 sections: htmltext and projects, and I should be able to edit them both. The URI would be something like //domain/backend/projects (the method would be the index and show the 2 sections).
When i click on projects how should it be handled? //domain/backend/projects/projects/ or //domain/backend/projects/list/
One step further, a project will hold some images or a gallery: //domain/backend/projects/edit/5/gallery/2
My question here is, first: would this be a good way to go, and, even more important, how would this be implemented in OOP?
the main projects controller:
class projects {
    function index() {
        // view index
    }

    function edit() {
        $project = new Project();
        $projectdata = $project->load(5);
    }
}
A single project controller
class project {
    function __construct() {
        // prepare the model to be used
        $this->projectmodel = $this->loadModel('project_model');
    }

    function load($id) {
        return $this->projectmodel->loadproject($id);
    }
}
project model
class project_model extends model { // extends for DB access and such
    function __construct() {
        // do stuff
    }

    function loadproject($id) {
        // cast to int to avoid SQL injection
        return $this->db->query("SELECT * FROM projects WHERE id = " . (int) $id . " LIMIT 1");
    }
}
Now my question. If this project has images, where should I load the image class to handle those? Should I load it in the project_model like $this->images = new Images(); and have a function inside the model
function loadimages($id) {
    return $this->images->load($id);
}
and then images would be something like:
class image extends model { // extends for DB access and such
    function __construct() {
    }

    function load($id) {
        // cast to int to avoid SQL injection
        return $this->db->query("SELECT * FROM project_images WHERE id = " . (int) $id . " LIMIT 1");
    }
}
It seems controllers and models get mixed up this way. Yet, logically, a project is a container that holds project info, which could be text, images and maybe videos. How would I go about setting that up logically as well?
The first part, about the URLs, is something called Routing or Dispatching. There is a quite good article about it in relation to Symfony 2.x, but the idea behind it is what matters. Also, you might look at how other frameworks implement it.
As for your original URL examples, galleries will be stored in the DB, won't they? And they will have a unique ID, which makes /backend/projects/edit/5/gallery/2 quite pointless. Instead your URLs should look more like:
/backend/gallery/5/edit        // edit gallery with ID 5
/backend/project/3             // view project with ID 3
/backend/galleries/project/4   // list galleries filtered by project with ID 4
The URL should contain only the information you really need.
This also would indicate 3 controllers:
And the example URLs would have a pattern similar to this:
/backend(/:controller(/:id|:page)(/:action(/:parameter)))
Where the /backend part is mandatory, but the controller is optional. If a controller is found, then the id (or page, when you deal with lists) and action are optional. If an action is found, an additional parameter is optional. This structure would let you deal with the majority of your routes, if written as a regular expression.
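A hedged sketch of that route pattern as a regular expression in Java (group names are illustrative):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Named groups mirror /backend(/:controller(/:id)(/:action(/:parameter))).
class Router {
    static final Pattern ROUTE = Pattern.compile(
        "^/backend(?:/(?<controller>[a-z]+)(?:/(?<id>\\d+))?" +
        "(?:/(?<action>[a-z]+)(?:/(?<param>\\w+))?)?)?$");

    // Returns the named group, or null when the URL does not match.
    static String part(String url, String group) {
        Matcher m = ROUTE.matcher(url);
        return m.matches() ? m.group(group) : null;
    }
}
```

For "/backend/gallery/5/edit" this yields controller "gallery", id "5", action "edit"; a URL outside the scheme simply fails to match.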
Before you start in on using or writing some sort of PHP framework, you should learn how to write proper object-oriented code. And that does not mean "know how to write a class". It means that you have to actually understand what object-oriented programming is, what principles it is based on, what common mistakes people make and what the most prevalent misconceptions are. Here are a few lectures that might help you with it:
This should give you some overview of the subject. Yeah, it's a lot, but I suspect that you will prefer videos over books. Otherwise, some reading materials:
You will notice that a lot of materials are language-agnostic. That's because the theory, for class-based object oriented languages, is the same.
Be careful with the extends keyword in your code. It means "is a". It is OK if class Oak extends Tree, because all oaks are trees. But if you have class User extends Database, someone might get offended. There is actually an OOP principle which talks about it: the Liskov substitution principle .. also there is a very short explanation
I've studied and implemented design patterns for a few years now, and I'm wondering: what are some of the newer design patterns (since the GoF)? Also, what should one, similar to myself, study [in the way of software design] next?
Note: I've been using TDD, and UML for some time now. I'm curious about the newer paradigm shifts, and or newer design patterns.
I'm surprised that no one has mentioned Martin Fowler's book Patterns of Enterprise Application Architecture. This is an outstanding book with dozens of patterns, many of which are used in modern ORM design (repository, active record), along with a number of UI layer patterns. Highly recommended.
The last few days, I have extensively read books and web pages about OOP and MVC in PHP, so that I can become a better programmer. I've come upon a little problem in my understanding of MVC:
Where do I put a mysql_query?
Should I put it in the controller and call a method on a model that returns data based on the provided query? Or should I put it in the model itself? Are both of the options I'm providing total garbage?
You could have listed the books you were reading, because most (if not all) PHP books which touch on MVC are wrong.
If you want to become a better developer, I would recommend for you to start with the article by Martin Fowler - GUI Architectures. Follow it with the book from the same author - "Patterns of Enterprise Application Architecture". Then the next step would be for you to research SOLID principles and understand how to write code which follows the Law of Demeter. This should cover the basics =]
Not really. At least not the classical MVC as it was defined for Smalltalk.
Instead in PHP you have 4 other patterns which aim for the same goal: MVC Model2, MVP, MVVM and HMVC. Again, I am too lazy to write about differences one more time, so I'll just link to an old comment of mine.
First thing you must understand is that the Model in MVC is not a class or an object. It is a layer which contains a multitude of classes. Basically, the model layer is all of the layers combined (though the second layer there should be called the "Domain Object Layer", because it contains "Domain Model Objects"). If you care to read a quick summary of what is contained in each part of the model layer, you can try reading this old comment (skip to the "side note" section).
The image is taken from Service Layer article on Fowler's site.
The Controller has one major responsibility in MVC (I'm gonna talk about the Model2 implementation here):
Execute commands on structures from model layer (services or domain objects), which change the state of said structures.
It usually has a secondary responsibility: to bind (or otherwise pass) structures from the Model layer to the View, but it becomes a questionable practice if you follow SRP.
The storage and retrieval of information is handled at the Data Source Layer, and is usually implemented as a DataMapper (do not confuse it with ORMs, which abuse that name).
Here is how a simplified use of it would look like:
$mapper = $this->mapperFactory->build(Model\Mappers\User::class);
$user = $this->entityFactory->build(Model\Entities\User::class);

$user->setId(42);
$mapper->fetch($user);

if ($user->isBanned() && $user->hasBannExpired()) {
    $user->setStatus(Model\Mappers\User::STATUS_ACTIVE);
}

$mapper->store($user);
As you see, at no point is the Domain Object even aware that the information from it was stored. Neither does it care about where you put the data. It could be stored in MySQL or PostgreSQL or some noSQL database. Or maybe pushed to a remote REST API. Or maybe the mapper was a mock for testing. All you would need to do to replace the mapper is provide this method with a different factory.
There are numerous ways to connect and interact with the database layer. In Java, for example, common usages are JDBC calls of raw SQL, object relational mappers, JDBCTemplate (Spring), stored procedures, etc.
In your language, which option is your preference and why? When would you consider the others?
ActiveRecord, which is a pattern documented first (I think) in Fowler's Patterns of Enterprise Application Architecture. I believe it's implemented in languages other than Ruby, although it's well known as a core technology in Rails. Whatever, it's a neat abstraction of the database, although I have to confess that I find it a bit clunky in the find_by_sql area. But that may just be me.
But (putting on Grumpy Old Man hat now) all the ORMs in the world are no substitute for a good knowledge of SQL, without which I really don't like to see access to an RDBMS being permitted at all.

I have identified the below layers to be implemented in my application. To my knowledge, a multi-layered architecture is preferred for an enterprise application.
I have chosen Symfony2 as the framework to be used in the app. Symfony2 has MVC architecture built into it. And the above layers exist as below.
The Model consists of the Business Layer, Data Access Layer and Service Layer. See the image below, borrowed from Martin Fowler's book, Patterns of Enterprise Application Architecture.
My problem is,
Is there a way to decouple the model into separate Business, Data Access and Service layers? How do I implement a multi-layered architecture in PHP? Should I use some other framework (provided it supports a layered architecture)?
The links below were helpful:
How should a model be structured in MVC?
How to build n-layered web architecture with PHP?
What is difference of developing a website in MVC and 3-Tier or N-tier architecture?
MVC application. How does mult-tier architecture fit in?
Implementing a service layer in an MVC architecture
Achieving 3-tier architecture with Symfony PHP
If you are serious about even trying to make something MVC-inspired, then your only options are Symfony and Zend. The others are just sad Rails clones that implement some bastardization of PAC. And Sf2.x seems to be the better of the two .. but that's like pointing out the smartest kid in the remedial class.
Unless you are able and willing to make your own framework, Sf2.x will be your best bet.
I think the main problem here is the misunderstanding of what "model" in MVC is. Model is a layer, not any specific class or object. In the picture from Fowler's book, the "MVC model" will be all three concentric circles taken together.
You basically have two major layer in MVC: model layer and presentation layer. The presentation layer interacts with model layer through services, that let you abstract the underlaying domain business logic.
Do not confuse it with "domain model". Domain model is an idea that describes all the knowledge and expertise from client's business, that is used in creation of application. Usually it will be represented as separate domain objects and the interaction of said objects.
Also, this picture is somewhat misleading. The data abstraction layer (usually implemented as a collection of data mappers) is not a lower-level structure than the domain model. Both domain objects and storage abstractions are used by services (either directly or through repositories & units of work). This is referred to as "application logic" and is a direct responsibility of services.
Bottom line: no, you do not need to pick some different framework. You only need to learn more about application architecture. You should read Patterns of Enterprise Application Architecture .. that would be a good place to start.
My 2 cents
Originally there was the DAL object, which my BOs called for info and then passed to the UI. Then I started noticing reduced code in the UI and there were Controller classes. What's the decent recommendation?
I currently structure mine
Public Class OrderDAL
    Private _id As Integer
    Private _order As Order

    Public Function GetOrder(id As Integer) As Order
        ' ...return Order
    End Function
End Class
then I have controller classes (recently implemented this style)
Public Class OrderController
    Private Shared _orderDAL As New OrderDAL

    Public Shared Function GetOrder(id) As Order
        Return _orderDAL.GetOrder(id)
    End Function
End Class
Then in my application
Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
    MsgBox(OrderController.GetOrder(12345).Customer.Name)
End Sub
I originally found that with the Shared class I didn't have to keep creating a new instance of the DAL whenever I needed to fetch data:
Dim _orderDAL As New OrderDal
_orderDAL.GetOrder(1234)
' .....
What's your take?
Thanks
I think there are several alternatives listed in this excellent book: Patterns of Enterprise Application Architecture. Some patterns that may be of interest to you:
The backend of the company's internal system is getting complicated, and I would like to explore the idea of an SOA-style architecture instead of a heavy monolithic system. Where shall I start?
I'm new to SOA. Does it mean... individual CF instances that talk to each other through remote web-service calls? How would one deal with things like... error handling and server outages? Would an ESB be beneficial to the architecture if every part of it is in ColdFusion?
How about the DB layer? Should they share one huge DB or should they store things in their own way themselves?
Thank you
First, what are you trying to accomplish? SOA is useful for systems that need to change relatively easily and be accessible to a new programmer. On the other hand, you tend to have a performance tradeoff because you end up segregating persistence - so that its interaction happens on the application server rather than in the database. Usually this performance tradeoff is a non-issue, but if you're working on a high-throughput transactional system you may need to compromise by putting some algorithms in the database and having those violate your services breakdown.
So if you want the pros and are not particularly concerned with the cons, start by reading some applicable books on the topic:
What you want to trend towards is a design with high-level service objects whose relationships are managed via dependency injection through a service locator container. In ColdFusion's case, ColdSpring is an example. This then allows for object mocking so you can unit test easily. For example, if the services live on other servers, then you have local service objects that act as proxies, to be passed in as dependencies. For testing, these proxies are mocked so they don't have to talk to the remote server.
Regarding error handling and server outages: I'm assuming you are primarily concerned about dealing with issues outside of the local server's control. This is another reason to use a service proxy object. Have this object be responsible for dealing with timeouts, bad response values and the like - effectively an anti-corruption layer.
As for database sharing, I would construct my table relationships to reflect my service object relationships. So if the tables in question have data that only relates through services I would not enforce any foreign key constraints. So while they could be in the same database, it doesn't matter. This enables you to move those services and database tables elsewhere with relatively little change to code.
This is my first question on here. I hope someone can help.
It has to do with good object-oriented design practices.
I am writing an android app, but the question is a general one and would apply equally to (e.g.) a swing user interface.
For the sake of argument, say I have a class Student.
public class Student {
    public int StudentID;
    public String firstName;
    public String lastName;
}
There is a principle that you should rarely ask an object for information about itself, rather you should tell it what you want it to do, and let the object do the work itself. To this end, I have the following methods
public class Student {
    public int StudentID;
    public String firstName;
    public String lastName;

    // Constructors
    public Student() {}

    public Student(int StudentID) {
        populateFromDataBase(StudentID);
    }

    private void populateFromDataBase(int StudentID) {
        // Get the data from the database and set the
        // values of all the properties of this
    }

    public void save() {
        // Save the values of the properties to db
    }
}
This is so that other classes may use this class without caring how it persists its information.
Disclaimers: I know I am not using accessors, just public properties. I'm just trying to keep this example simple.
Don't ask how an external class would know a StudentID; that's irrelevant to the question I want to ask, which is this:
(Say) I want to draw a table of students and their details on the screen. From the UI class (say a ListActivity in Android) I could get an array of students, then loop through them, setting the properties of my ListView as I go. The problem I have with that is that I seem to be thinking far too procedurally, and not in the true spirit of object-oriented design. It also requires asking each student object about itself, violating encapsulation.
Apparently (from what I read) the student should draw itself.
Here's where I get confused. How can a student draw itself when it knows nothing of the UI? Do I pass a reference to the UI to the student object? Does this break the separation of presentation and business layers or not? What is considered good practice? Are there any articles or design patterns out there, preferably with example code, because I could not find any? Am I worrying about something not that important and should I just go with my first messy idea?
I really would appreciate any input, as clearly this is an issue that will reoccur with anything that I code.
Another possibility I considered was accessing the database directly from the UI and binding to a cursor, but that just seems wrong. or is it?
Opinions may vary, but IMHO, objects should not draw themselves, or for that matter save themselves to a database.
For drawing, I would generally implement some form of double dispatch, such as the Visitor Pattern.
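A minimal double-dispatch sketch (the StudentRenderer interface is a made-up name): the student hands its data to whatever renderer is passed in, so it never touches UI classes directly.

```java
// "Tell, don't ask": the student pushes its data to a renderer abstraction.
interface StudentRenderer {
    void render(int id, String firstName, String lastName);
}

class Student {
    private final int id;
    private final String firstName;
    private final String lastName;

    Student(int id, String firstName, String lastName) {
        this.id = id;
        this.firstName = firstName;
        this.lastName = lastName;
    }

    // The UI supplies the renderer; Student stays ignorant of Android, Swing, etc.
    void renderTo(StudentRenderer renderer) {
        renderer.render(id, firstName, lastName);
    }
}
```

The Android or Swing layer implements StudentRenderer, so the separation of presentation and business layers is preserved.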
For separation from the UI, you should also consider Model-View-Controller, Model-View-Presenter or Model-View-ViewModel.
Persistence of objects can be rather more complex, and might involve assorted patterns, as described in Martin Fowler's Patterns of Enterprise Application Architecture and summarized at Fowler's website catalog.
And of course, the UI should not bypass the model and go straight to the database.
When designing business objects I have tried several different methods of writing the data access layer. Some have worked out better than others but I have always felt there must be a "better" way.
I would really just like to see the different ways people have handled the DAL in different situations and their opinion of how the technique worked or didn't work well.
Unfortunately I don't think there is a "better way", it's too dependent on the specific situation as to what DAL approach you use. A great discussion of the "state of the art" is Patterns of Enterprise Application Architecture by Martin Fowler.
Chapter 10, Data Source Architectural Patterns specifically talks about most of the most commonly used patterns for business applications.
In general though, I've found using the simplest approach that meets the basic maintainability and adaptability requirements is the best choice.
For example on a recent project a simple "Row Data Gateway" was all I needed. (This was simply code generated classes for each relevant database table, including methods to perform the CRUD operations). No endless debates about ORM versus stored procs, it just worked, and did the required job well.
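A minimal Row Data Gateway sketch in Java, with an in-memory map standing in for the table (in a real gateway the CRUD methods would issue SQL; all names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// One gateway instance per table row; the static map stands in for the table.
class PersonGateway {
    static final Map<Long, PersonGateway> TABLE = new HashMap<>();

    long id;
    String firstName;
    String lastName;

    void insert() { TABLE.put(id, this); }    // INSERT INTO person ...
    void update() { TABLE.put(id, this); }    // UPDATE person SET ... WHERE id = ?
    void delete() { TABLE.remove(id); }       // DELETE FROM person WHERE id = ?

    static PersonGateway find(long id) { return TABLE.get(id); }
}
```

Code generation per table is straightforward precisely because the shape is this mechanical.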
There are several common patterns. The Patterns of Enterprise Application Architecture book is a good reference for these:
If you use an ORM, such as llblgen, you get the choice of self-servicing or adaptor.
Anyone have a code review checklist/guidelines for a Java web application? A review checklist for each layer would be great.
If there is no such source on the internet, do you know of any good books? I found some useful tips in "Expert One-On-One..." by Rod Johnson. Any other good books you recommend?
In a response to a question in the same area that I've asked, someone suggested the book Patterns of Enterprise Application Architecture by Martin Fowler. I've yet to read it, but it seems to have good reviews.
How can I start with the class design before starting the development of a large application (both WinForms and web app)? What are the initial "be careful" things I should check before designing the class structures?
How do I identify the usage of interfaces, abstract classes, delegates, events, etc. in my application design?
A thorough answer to this question would require a book, not a StackOverflow post! In fact, there are quite a few books about this already, like Martin Fowler's Patterns of Enterprise Application Architecture. Here are some general pointers:
Make sure you understand the portion of the problem domain that you're working on. Have you talked to your customers to run things by them first? Does your domain model match the way they think about the world?
Statistically speaking, your application is unlikely to be special. That means that if someone insists that they need a particular architecture or implementation pattern for things to work (e.g. an enterprise service bus, message queues, etc.), you should view it with a skeptical eye. Consider whether another approach might not be better.
Isolate unrelated portions of the application from each other structurally as well as logically. Don't just loosely couple or decouple your classes; make them entirely separate projects that must be built separately.
Code to interface, not implementation. If a number of classes all do something similar, make an interface. Don't use abstract base classes as pseudo-interfaces. Depend on the interface and pass that around instead of the individual implementing classes.
Understand the larger scope of the application. What business purpose does it serve? How will it help people, accomplish goals, or improve productivity? Are the things that you're building aligned with that purpose? Make sure you're not building for the sake of building.
Be wary when someone tells you that this is an "enterprise application". This is a loaded term with too many connotations for too many different people. Make sure to be clear about cross-cutting concerns like security, authorization, authentication, service guarantees, et cetera.
Enterprise applications are prone to bloat. Don't be afraid to say "no" to new features, and refactor mercilessly with good unit tests to make sure you're getting the most bang for your buck.
Finally, everything in moderation. Taking any of the above (or anything in general, really) to an extreme is a bad idea. The only thing you should really do in the extreme is moderation itself! :)
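As a small illustration of "code to interface, not implementation" from the list above (all names invented for the example):

```java
import java.util.ArrayList;
import java.util.List;

interface Notifier {
    String notifyUser(String user);
}

class EmailNotifier implements Notifier {
    public String notifyUser(String user) { return "email:" + user; }
}

class SmsNotifier implements Notifier {
    public String notifyUser(String user) { return "sms:" + user; }
}

// The caller depends only on the interface, so implementations are interchangeable.
class Alerts {
    static List<String> sendAll(Notifier n, List<String> users) {
        List<String> out = new ArrayList<>();
        for (String u : users) {
            out.add(n.notifyUser(u));
        }
        return out;
    }
}
```

Swapping EmailNotifier for SmsNotifier requires no change to Alerts, which is the whole point.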
I am asking this question because of Zend Framework. I really like it, but the way it offers modularity is still not really flexible. For instance, a lot of us at first create default and admin modules, but in reality that is not reusable. In fact admin should not be a module but some paradigm that takes care of every single module's admin side (like a plug-in manager).
So, anyhow, is there a good book on the architecture of modular web applications?
P.S. sorry if this is a duplicate.
Yes, there are several. But there is only one that is an absolute must read in the web framework world:
Patterns of Enterprise Application Architecture:
It is the de facto standard book with all patterns used through Zend Framework, and almost all other web frameworks. The Front Controller, the Router, the Service Layer, they all come from this book.
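For a feel of what a Front Controller does, here is a deliberately tiny Java sketch (not how Zend implements it - just the idea of a single dispatch point):

```java
import java.util.HashMap;
import java.util.Map;

interface Handler {
    String handle();
}

// Every request enters through one object, which looks up the right handler.
class FrontController {
    private final Map<String, Handler> routes = new HashMap<>();

    void register(String path, Handler h) { routes.put(path, h); }

    String dispatch(String path) {
        return routes.getOrDefault(path, () -> "404").handle();
    }
}
```

In a real framework the Router does the lookup and the handlers are controller actions, but the single entry point is the same.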
You can find a good summary on the web at Fowler's site
Take a look at Zend Framework 1.8 Zend Application Development.
There is a chapter about creating an administration module, which is something you called 'some paradigm…'.

I am trying to enhance my knowledge of the "design" aspects of software (engineering), and I am more into the Java world.
The first thing I came across was the GoF book, which as per my understanding is the "core" or "foundation" of design patterns (please correct me if I am wrong in interpreting it).
I came across the following terms as I try to go deeper into design (patterns):
1)
J2EE design pattern.
2)
Patterns of Enterprise application architecture.
3)
GoF patterns.
I am a bit confused as to why there are so many design patterns, and which is used when? In particular, what is the difference between the patterns in #1 and #2?
Any explanation in simple words would be of great help.
Just as there are many books on programming, there are many books on patterns; so the simplest answer to, "what is the difference" is: those three books were written by different authors.
The GoF book (3) was the first to apply the concept of patterns to software engineering, so in that sense, I think most people would agree that it was the "foundation" for subsequent, pattern-related work.
Do note that architectural patterns and design patterns are separate concepts, as architecture and design represent different levels of abstraction (architecture being a higher level).
Any detailed explanation of when to apply each of these patterns would require a much longer format than SO (hence the aforementioned authors' motivation to publish books) however, most if not all common patterns will have numerous individual threads on SO.
Finally, a key difference in J2EE Patterns is that they are language specific (Java) whereas the other two books are language agnostic.
I have a method in MyWebpage.aspx.cs like so:
public partial class MyWebpage : PageBase
{
    private readonly DataAccessLayer dataAccessLayer;

    protected string GetMyTitle(string myVar, string myId)
    {
        if (string.IsNullOrEmpty(myVar))
        {
            return string.Empty;
        }

        return dataAccessLayer.GetMyTitle(Convert.ToInt32(myId), myVar);
    }
}
In the DataAccessLayer class, I have a method that talks to the DB, does the DAL stuff, and returns the title.

What's the best practice for accessing the DAL from the MyWebpage.aspx.cs class? Do I need to create a new DataAccessLayer() object each time? Should I create it once in my PageBase class, or every time I call it in a code-behind?
First of all, accessing the DAL from your code-behind or presentation layer is generally not a good practice, because in that case you need to put your business logic code in your code-behind (presentation layer), which causes mixing of concerns, high coupling, duplication and many other issues. So, if you're looking for the best practices, I suggest having a look at the following links:
And these are really good books:
Also, about having static functions for calling the DAL: as you know, static functions are vulnerable to multi-threading issues, so if you are using anything shared in the DAL functions (which is sometimes the case, like a shared connection, command, etc.) it will break your code. So I think it's better to avoid static functions in this layer.
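To make this concrete, here is a minimal sketch in Python (the thread is about C#, but the idea is language-neutral, and all names below are hypothetical): the page depends on a DAL instance whose connection factory is injected, instead of calling static DAL methods that might share state across threads.

```python
class DataAccessLayer:
    """Instance-based DAL: no statics, no shared connection across threads."""

    def __init__(self, connection_factory):
        # the factory is injected, so tests can pass a fake
        self._connect = connection_factory

    def get_my_title(self, item_id, my_var):
        conn = self._connect()
        try:
            # stand-in for a parameterized SQL query
            return conn.lookup_title(item_id, my_var)
        finally:
            conn.close()


class PageBase:
    """A page (or request handler) receives its own DAL instance."""

    def __init__(self, dal):
        self.dal = dal

    def get_my_title(self, my_var, my_id):
        if not my_var:
            return ""
        return self.dal.get_my_title(int(my_id), my_var)


class FakeConnection:
    """Test double standing in for a real DB connection."""

    def lookup_title(self, item_id, my_var):
        return f"{my_var}-{item_id}"

    def close(self):
        pass


page = PageBase(DataAccessLayer(FakeConnection))
title = page.get_my_title("report", "7")
```

Because the DAL is an instance with an injected factory, each request can get its own connection, and unit tests never need a real database.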
I'm designing this collection of classes and abstract (MustInherit) classes…
This is the database table where I'm going to store all this…
As far as the Microsoft SQL Server database knows, those are all nullable ("Allow Nulls") columns.
But really, that depends on the class stored there: LinkNode, HtmlPageNode, or CodePageNode.
Rules might look like this...
How do I enforce such data integrity rules within my database?
UPDATE: Regarding this single-table design...
I'm still trying to zero in on a final architecture.
I initially started with many small tables with almost zero nullable fields.
Which is the best database schema for my navigation?
And I learned about the LINQ to SQL IsDiscriminator property.
What’s the best way to handle one-to-one relationships in SQL?
But then I learned that LINQ to SQL only supports single table inheritance.
Can a LINQ to SQL IsDiscriminator column NOT inherit?
Now I'm trying to handle it with a collection of classes and abstract classes.
Please help me with my .NET abstract classes.
It looks like you are attempting the Single Table Inheritance pattern, this is a pattern covered by the Object-Relational Structural Patterns section of the book Patterns of Enterprise Application Architecture.
I would recommend the Class Table Inheritance or Concrete Table Inheritance patterns if you wish to enforce data integrity via SQL table constraints.
Though it wouldn't be my first suggestion, you could still use Single Table Inheritance and just enforce the constraints via a Stored Procedure.
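If you do stay with the single-table design, the per-type rules can often be enforced with CHECK constraints rather than a stored procedure. A runnable SQLite sketch (the column names and rules are illustrative, not taken from the question's actual table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Node (
        NodeId   INTEGER PRIMARY KEY,
        NodeType TEXT NOT NULL CHECK (NodeType IN ('Link', 'HtmlPage')),
        Url      TEXT,
        Html     TEXT,
        -- per-type nullability rules, enforced by the database itself
        CHECK (NodeType <> 'Link'     OR (Url IS NOT NULL AND Html IS NULL)),
        CHECK (NodeType <> 'HtmlPage' OR (Html IS NOT NULL AND Url IS NULL))
    )
""")

# a LinkNode with a Url is accepted
conn.execute("INSERT INTO Node VALUES (1, 'Link', 'http://example.com', NULL)")

# a LinkNode without a Url violates its rule and is rejected
try:
    conn.execute("INSERT INTO Node VALUES (2, 'Link', NULL, NULL)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
```

The same CHECK syntax works in SQL Server, so the "Allow Nulls" columns stay nullable at the schema level while the rules per NodeType are still enforced.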
I'm looking for a good online resource of software patterns. Preferably something with a comprehensive selection and concise well written explanations, not just a collection of links. .Net examples would be nice, but not essential.
Grab this book:
P of EAA by Martin Fowler
Here's the online info of that book
I recommend the Head First Design Patterns book by the Freemans. It covers general design patterns applicable in most OO languages, and I recommend it as an introductory book to design patterns. After this book, the GoF book would be another recommendation (but not as a first book).
I'm looking forward to start developing a new server side enterprise communication framework in Java and I'm wondering if anyone knows a good book on the subject? Some best practices and advice would be welcome.
Thanks.
A couple of recommended books:
I'm trying to figure out the best way to map inheritance relationships in an object model into a relational database. For example consider the following class structure.
public class Item
{
    public string Name { get; set; }
    public int Size { get; set; }
}

public class Widget : Item
{
    public string Color { get; set; }
}

public class Doohicky : Item
{
    public string Smell { get; set; }
}
Here's are a few options I'm considering for how to save this structure to a database.
Option 1: Single table for all item types
Items Table: ItemID, Name, Color, Smell
This sucks as it would require NULL values.
Option 2: Separate tables for each item type
Widgets Table: WidgetID, Name, Color
Doohicky Table: DoohickyID, Name, Smell
This is better, but would be more difficult to list all Items
Option 3: Linked tables
Items Table: ItemID (PK), Name, Size
Widgets Table: WidgetID (PK), ItemID (FK), Color
Doohicky Table: DoohickyID (PK), ItemID (FK), Smell
I think this option is the best, as it prevents me from having Null values in any fields, plus it will make it easier to list all the Items, and/or create a list of a specific type of Item (Widgets or Doohickies).
However, I'm not sure how to create the relationship between the Items table and the Widgets and Doohickies tables. I don't want to end up with row in either table referencing the same ItemID in the Items table.
For example, when I add an entry to the Widgets table, how can I ensure that it is linked to a new entry in the Items table with a unique ItemID? Should I instead only track the ItemID, rather than separate type-specific IDs like WidgetID and DoohickyID, and use it to create one-to-one relationships between the Items table and the type-specific tables?
Option 4
Items Table: ItemID (PK), Name, Size
Widgets Table: ItemID (PK), Color
Doohicky Table: ItemID (PK), Smell
You're describing Single-Table Inheritance, Concrete Table Inheritance, and Class Table Inheritance.
See the classic Martin Fowler book Patterns of Enterprise Application Architecture for lots of information about them.
One thing you can do to make sure only one sub-type can reference the super-type is to use an ItemType field. A foreign key can reference the columns of a UNIQUE constraint as well as a PRIMARY KEY. So you can add Items.ItemType and make a unique constraint over (ItemID, ItemType). The foreign key in each sub-type table references this two-column unique key. And then you can constrain the ItemType in each sub-type table to be a certain value. Then it must match the value in the super-type table, which can only have one value.
Items Table: ItemID (PK), ItemType, Name, Size
  UNIQUE: (ItemID, ItemType)
Widgets Table: ItemID (PK), ItemType, Color
  FK: (ItemID, ItemType)
  CHECK: ItemType = 'W'
Doohicky Table: ItemID (PK), ItemType, Smell
  FK: (ItemID, ItemType)
  CHECK: ItemType = 'D'
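Here is the same trick as a runnable SQLite sketch (SQLite enforces composite foreign keys once PRAGMA foreign_keys is on; the sample data is made up). An item stored as a Doohicky cannot be referenced from the Widgets table, because no matching (ItemID, 'W') pair exists in Items:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
    CREATE TABLE Items (
        ItemID   INTEGER PRIMARY KEY,
        ItemType TEXT NOT NULL,
        Name     TEXT,
        Size     INTEGER,
        UNIQUE (ItemID, ItemType)              -- target for the composite FK
    );
    CREATE TABLE Widgets (
        ItemID   INTEGER PRIMARY KEY,
        ItemType TEXT NOT NULL CHECK (ItemType = 'W'),
        Color    TEXT,
        FOREIGN KEY (ItemID, ItemType) REFERENCES Items (ItemID, ItemType)
    );
""")

conn.execute("INSERT INTO Items VALUES (1, 'W', 'Sprocket', 3)")
conn.execute("INSERT INTO Items VALUES (2, 'D', 'Thingamajig', 5)")
conn.execute("INSERT INTO Widgets VALUES (1, 'W', 'Red')")  # item 1 is a Widget: OK

try:
    # item 2 is a Doohicky, so (2, 'W') does not exist in Items
    conn.execute("INSERT INTO Widgets VALUES (2, 'W', 'Blue')")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
```

The CHECK keeps a Widgets row from claiming any other ItemType, and the composite foreign key keeps it from pointing at a row of the wrong type.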
In my analysis of the newer web platforms/applications, such as Drupal, WordPress, and Salesforce, many of them build their software based on the concept of modularization, where developers can create new extensions and applications without needing to change code in the "core" system maintained by the lead developers. In particular, I know Drupal uses a "hook" system, but I don't know much about the engine or design that implements it.
If you were to go down the path of creating an application and you wanted a system that allowed for modularization, where do you start? Is this a particular design pattern that everyone knows about? Is there a handbook that this paradigm tends to subscribe to? Are there any websites that discuss this type of development from the ground up?
I know some people point directly to OOP, but that doesn't seem to be the same thing, entirely.
This particular system I'm planning leans more towards something like Salesforce, but it is not a CRM system.
For the sake of the question, please ignore the Buy vs. Build argument, as that consideration is already in the works. Right now, I'm researching the build aspect.
The Plugin pattern from P of EAA is probably what you are after. Create a public interface for your service to which plugins (modules) can integrate ad hoc at runtime.

I want to make a perfect custom DAL (data abstraction layer) class to use with all my projects.
I've searched the internet and found some samples for this but I never know which is the best approach.
Is it to make [Attributes]? Or use <Generics>, or something else?

So please just point me in the right direction and I'll go on from there.
Thanks again and forgive my language.
Definitely don't write your own persistence manager. You should use an Object-Relational Mapper (ORM) if you want to start from a class structure and have the ORM generate the SQL table structures for you, or use an SQL Mapper if you want to start from SQL tables and want to have your classes represent table rows.
I've had great experience using the iBatis SQL Mapper, and a lot of people like Hibernate for an ORM (though there's a learning curve).
Martin Fowler describes several good approaches for writing data access layers in Patterns of Enterprise Application Architecture (here's a catalog).
For instance, iBatis for .NET uses Fowler's Table Data Gateway pattern. In iBatis you specify Table Data Gateway objects in XML. Each Gateway typically governs access to one SQL table, although you can do multi-table operations too. A Gateway is composed of SQL statements, each wrapped in a bit of XML. Each SELECT returns one or more row objects, which are just sets of attributes plus getter and setter methods (in .NET these are called POCOs or PONOs, Plain Old C# Objects or Plain Old .NET Objects.). Each INSERT or UPDATE takes a POCO as its input. This seemed pretty intuitive, and not too hard to learn.
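A minimal Python/SQLite sketch of the Table Data Gateway shape described above: one gateway object owns all SQL for one table, and queries return plain row objects (the analogue of the POCOs mentioned here). All names are illustrative.

```python
import sqlite3
from dataclasses import dataclass


@dataclass
class Person:
    """Plain row object: just fields, no persistence logic."""
    person_id: int
    name: str


class PersonGateway:
    """Table Data Gateway: every SQL statement for `person` lives here."""

    def __init__(self, conn):
        self.conn = conn

    def insert(self, person):
        self.conn.execute(
            "INSERT INTO person (person_id, name) VALUES (?, ?)",
            (person.person_id, person.name),
        )

    def find_all(self):
        rows = self.conn.execute("SELECT person_id, name FROM person")
        return [Person(*row) for row in rows]


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (person_id INTEGER PRIMARY KEY, name TEXT)")

gateway = PersonGateway(conn)
gateway.insert(Person(1, "Ada"))
people = gateway.find_all()
```

In iBatis the SQL would sit in XML rather than in the gateway class, but the division of labor is the same: one object per table governs access, and row objects carry the data out.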
Scenario: You have a reasonably sized project with some forms, classes, etc.
Question: How would you group your functions? Would you put the functions common to all the forms into one separate class or a number of classes depending on function? What about your database calls. Would you have one class that contained all your database functions? Or would your create a utility class that would handle the calls?
Reason: I'm looking for some direction on how to best "group" functions. For instance I can see that having all the database functions in one class would make it easier to change/debug later, but is that necessary? I'm partial to the utility that handles all the connections for you and returns the formatted result but the SQL code does end up all over the place.
End Note: I know there are a lot of questions but as I said I'm looking for direction, you don't have to take every answer and answer it to a T but some coding guidelines or some coding wisdom from past experiences would be greatly appreciated
Many thanks,
Check out 3-Tier Architecture. But if you're looking for something more in depth, there are many great books on the subject of Application Architecture. You could try Patterns of Enterprise Application Architecture.
For a system where a user can be a member or admin, users with the member role must either pay for it with a recurring subscription, or be given complimentary access.

My current approach:
- A user database table.
- A subscription table includes a record for a user if they have a subscription.
- A subscription_event table records each billing or failed payment. I can query this to see if the last event was indeed a successful payment.

But how should I record if a user is given "complimentary" access?
- Have another table complimentary_subscription with the user ID as the foreign key?
- Record a special "subscription" for them in subscription?
- Or add columns to their user row like is_complimentary and complimentary_expires_date?
- Add a more general expires column to the user row?
Question Review
As @leanne said, you're modeling a Subscription whose specializations are, say, MonthlySubscription and ComplimentarySubscription (to give them a name for this answer).

You know that a subscription can expire:
- For a MonthlySubscription, that happens when a user didn't pay the current month's subscription.
- For a ComplimentarySubscription, the expiration date is assigned when it's given to the user.

As you can see, an ExpirationDate is an essential attribute of any Subscription, but the way you store it is different in each case. In the first case you'll have to calculate it based on the last event; in the latter you can retrieve it directly.

Dealing with Inheritance in the Database

So, to map this sample model to a database schema, you could go with the Class Table Inheritance pattern described in Martin Fowler's Patterns of Enterprise Application Architecture book. Here is its intent:

"Represents an inheritance hierarchy of classes with one table for each class."

Basically you'll have a table with the attributes shared in common between the classes, and you'll store the attributes specific to each class in a separate table.
Keeping this in mind, let's review the options you proposed:

- Have another table complimentary_subscription with the user ID as the foreign key?

Having a separate table for storing ComplimentarySubscription-specific details sounds good, but if you don't relate this table to the subscription table, you can end up with a user that has both a MonthlySubscription and a ComplimentarySubscription. Its foreign key should point to the subscription table, which is the one that tells you whether a user has a subscription or not (and you'll have to enforce up to one subscription per user).

- Record a special "subscription" for them in subscription?

Yes, you'll have to record that a user has a subscription, either monthly or complimentary. But if you're thinking of something like recording a special subscription whose amount is zero, you're looking for a solution that matches your current design instead of searching for the right model for it (it might fit, and it might not).

- Or add another column to their user row for columns like is_complimentary and complimentary_expires_date?

Personally I don't like this one, because you're putting information where it doesn't belong. Taking that into account, where will you store the expiration date for complimentary subscriptions (remember that for monthly ones you are calculating it, not storing it)? It seems that all that information is crying out for its own "home". Also, if later you need to add a new type of subscription, that table will begin to clutter.

- Add a more general expires column to the user row?

If you do this, you'll have to deal with data synchronization each time the subscription_event table gets changed (in the case of a monthly subscription). Generally I try to avoid this data-duplication situation.
Sample Solution

What I would do to favor extensibility when adding new types of subscription is to have the subscription table store the details shared between MonthlySubscription and ComplimentarySubscription, adding a type column that will let you differentiate which kind of subscription a row is related to.

Then, store the details specific to each subscription type in its own table, referencing the parent subscription row.

For retrieving data, you'll need an object in charge of instantiating the right type of Subscription given the type column value for a subscription row.

You can take a look at the pattern in the "Patterns of Enterprise Application Architecture" book for further assistance on how to define the type column values, how to use a mapper object to do the Subscription instantiation, and so on.
01/03/2012 Update: Alternatives for defining and handling the type column

Here's an update to clarify the following question posted by @enoinoc in the comments:

"Specifically for the suggested type column, could this be a foreign key pointing to a Plans table which describes different types of subscriptions, such as how many months before they expire without payment? Does that sound logical?"

It's OK to have that information in the Plans table, as long as it's not static information that doesn't need to be edited. If that's the case, don't over-generalize your solution; put that knowledge in the corresponding Subscription subclass instead.

About the foreign key approach, I can think of a drawback to going that way:
- You'll need to know which subclass of Subscription to use for each row in the Plans table. If all you've got is a foreign key value (say, an integer), you'll have to write code to map that value to the class to use. That means extra work for you :)
Proposed Solution

What I'd do is put a class name into the type column in the Plans table. That column will hold the name of the class that knows how to build the right Subscription from a particular row.

But why do we need an object to build each type of Subscription? Because you'll be using information from different tables (subscription_event and complimentary_subscription) for building each type of object, and it's always good to isolate and encapsulate that behavior.

Let's see how the Plans table might look:

Id | Name          | Type                            | Other Columns...
1  | Monthly       | MonthlySubscriptionMapper       |
2  | Complimentary | ComplimentarySubscriptionMapper |

Each SubscriptionMapper can define a method Subscription MapFrom(Row aRow) that takes a row from the database and gives you the right instance of the Subscription subclass (MonthlySubscription or ComplimentarySubscription in the example).

Finally, to get an instance of the mapper specified in the type column (without using nasty if or case statements), you can take the class name from the column's value and, by using reflection, create an instance of that class.
I am designing an API and I would like it to be simple to use. So, if I have Customers, Statements, and Payments. Does it make sense to have objects such as: Customer, CustomerHandler, Statement, StatementHandler, Payment, PaymentHandler? This way when the developer wants to do something with customers he/she knows to create a CustomerHandler and then all of the possible functions that one would like to perform with a customer are inside the handler.
Methods like: CustomerHandler: AddCustomer(customer), GetCustomer(customerID), GetCustomerCount()… StatementHandler: AddStatement(customerID), GetStatement(statementID), GetStatementCount(customerID)… PaymentHandler: GetPaymentsByCustomer(customerID), GetPayment(paymentID), GetPaymentCountByCustomer(customerID)…
This way if the developer wants to work on receiving payments he/she knows to go to the PaymentHandler. My coworker thought that functions like GetPayments(customerID) belong in a class that manages the customer. So, it would be like Customer.GetPayments() AS Payments. But if I have some other entity like Worker, there would be Worker.GetPayments() AS Payments. So, I see the logic with both approaches. The first one groups things together so that if no matter whom the payment is coming from you get it all from one class by having functions like GetPaymentsByCustomer(CustomerID) and GetPaymentsByWorker(WorkerID). This way one does not have to stumble through different handler or manager objects to get payments. Both approaches make sense to me, how about you? Or, are we both off and there is a better way to do this? Thanks in advance!
You are pretty much describing two ways (patterns) for data access:
Please, get Martin Fowler's book Patterns of Enterprise Application Architecture and read through all pros and cons. In particular if you try to expose your objects and APIs as web services you might want to go with the data mapper approach (as you suggest).
Active Record is very popular because it is simpler. Decide for yourself what suits your needs best.
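The contrast in miniature, sketched in Python (both patterns are language-neutral; the dict stands in for a real database and all names are made up):

```python
# Active Record: the domain object knows how to persist itself.
class CustomerRecord:
    def __init__(self, db, customer_id, name):
        self.db, self.customer_id, self.name = db, customer_id, name

    def save(self):
        self.db[self.customer_id] = self.name  # persistence lives in the model


# Data Mapper: the domain object is persistence-ignorant;
# a separate mapper moves it in and out of storage.
class Customer:
    def __init__(self, customer_id, name):
        self.customer_id, self.name = customer_id, name


class CustomerMapper:
    def __init__(self, db):
        self.db = db

    def save(self, customer):
        self.db[customer.customer_id] = customer.name

    def find(self, customer_id):
        return Customer(customer_id, self.db[customer_id])


db = {}  # stand-in for a real database table
CustomerRecord(db, 1, "Ada").save()
CustomerMapper(db).save(Customer(2, "Bob"))
```

With the mapper, Customer has no idea a database exists, which is exactly what makes it easy to expose over web services.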
I am reading the book Patterns of Enterprise Application Architecture. While going through the basic patterns, such as the Registry pattern, I am finding that these patterns, first published in November 2002, may not always be the best possible solutions to go for.
For example, take the Registry pattern. In our organization we use simple JDBC calls for DB operations and, if needed, pass the connection object around for a single transaction. This approach is not the best, but the alternative of using the Registry pattern also does not seem good, as the dependency would then not be visible, which can be an issue for testing. Dependency Injection is suggested as a better way to implement this behavior.
Can anyone who has worked on Java EE web/enterprise apps comment on this - and what would you recommend to analyze the usage of each pattern (its pros and cons?). Any recent book that does a coverage of this in detail?.
(...) Any recent book that does a coverage of this in detail?
I recommend Adam Bien's Real World Java EE Patterns if you're looking for an up to date coverage of patterns and best practices with Java EE 5 and 6:
I am writing RESTful services using Spring and Hibernate. I have read many resources on the internet, but they did not clarify my doubts. Please explain in detail what the DAO, DTO and Service layers are in the Spring framework, and why these layers are used in Spring to develop RESTful API services.
First off, these concepts are Platform Agnostic and are not exclusive to Spring Framework or any other framework, for that matter.
A DTO is an object that carries data between processes. When you're working with a remote interface, each call is expensive. As a result you need to reduce the number of calls. A DTO is often little more than a bunch of fields and the getters and setters for them.

A Data Access Object (DAO) abstracts and encapsulates all access to the data source. The DAO manages the connection with the data source to obtain and store data, and implements the access mechanism required to work with it. The data source could be a persistent store like an RDBMS, or a business service accessed via REST or SOAP. The DAO abstracts the underlying data access implementation for the Service objects to enable transparent access to the data source, and the Service delegates data load and store operations to the DAO.

Service objects do the work that the application needs to do for the domain you're working with. That involves calculations based on inputs and stored data, validation of any data that comes in from the presentation, and figuring out exactly what data source logic to dispatch, depending on commands received from the presentation. A Service Layer defines an application's boundary and its set of available operations from the perspective of interfacing client layers. It encapsulates the application's business logic, controlling transactions and coordinating responses in the implementation of its operations.
Martin Fowler has a great book on common Application Architecture Patterns named Patterns of Enterprise Application Architecture. There is also, Core J2EE Patterns that worth looking at.
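How the three layers call each other can be sketched like this (in a Spring application these would typically be a @Repository, a @Service and a plain DTO class; everything below is an illustrative stand-in, with a dict in place of a real data source):

```python
from dataclasses import dataclass


@dataclass
class UserDto:
    """DTO: only the fields a client needs, nothing else."""
    user_id: int
    name: str


class UserDao:
    """DAO: the only layer that touches the data source."""

    def __init__(self, store):
        self.store = store  # could be an RDBMS; a dict stands in here

    def find_by_id(self, user_id):
        return self.store[user_id]


class UserService:
    """Service: business rules and coordination; storage is delegated to the DAO."""

    def __init__(self, dao):
        self.dao = dao

    def get_user(self, user_id):
        if user_id <= 0:
            raise ValueError("invalid id")  # validation belongs in the service
        name = self.dao.find_by_id(user_id)
        return UserDto(user_id, name)  # a DTO crosses the layer boundary


service = UserService(UserDao({1: "Ada"}))
dto = service.get_user(1)
```

A REST controller would then call only the service and serialize the returned DTO, never touching the DAO directly.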
I have a question. I need to create a little thing to do with products. Now I can have say 7 different types of products. Some are subtypes of others e.g.
Cars
- Vans
  - petrol
  - diesel
- City
- Hatchback
- Saloon
- Estate
  - petrol
  - diesel
Now, for the sake of the argument all my City, Hatchback and Saloon cars are hybrid/gas/whatever and I do not plan to sell petrol and diesel ones. However there is a possibility that I may have petrol and diesel saloon cars sometime in the future, but it's not like I am going to have 20+ types of products. If it is going to go up I will have probably 2-3 more types.
From what I understand
the Prototype Pattern may be a good one here, for I would be able to avoid duplication between estate->petrol and van->petrol... but then again vans will have different characteristics than, say, a city car, e.g. maximum loading dimensions.
I have been reading extensively about design patterns and one thing I remember for certain is not to use pattern when you don't need it. Now the question is - do I need it?
Thanks!
The Decorator Pattern is probably the most straightforward one to use and would be a good way to extend concrete objects' functionality and/or characteristics.
Here is some light reading: Head First Design Patterns - CH3 pdf
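Applied to the car example, a quick Decorator sketch (in Python, since the question names no language; all classes are made up): each option wraps a car-like object and adds to its description and price at runtime.

```python
class Car:
    """Concrete product."""

    def description(self):
        return "Saloon"

    def price(self):
        return 20000


class CarOption:
    """Decorator base: wraps any car-like object."""

    def __init__(self, wrapped):
        self.wrapped = wrapped


class DieselEngine(CarOption):
    def description(self):
        return self.wrapped.description() + " + diesel"

    def price(self):
        return self.wrapped.price() + 1500


class TowBar(CarOption):
    def description(self):
        return self.wrapped.description() + " + tow bar"

    def price(self):
        return self.wrapped.price() + 300


# options are stacked at runtime instead of being baked into subclasses
car = TowBar(DieselEngine(Car()))
```

This avoids a subclass explosion: adding a petrol Estate later means composing existing pieces, not creating another class per combination.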
FYI, here are a couple of must-haves for learning and referencing design patterns, regardless of your language of choice:
1) Head First Design Patterns
2) Patterns of Enterprise Application Architecture
3) Design Patterns: Elements of Reusable Object-Oriented Software
And sites:
2) StackOverflow Design Patterns Newbie
There are a few others; I'll have to dig them up.

In my company we have developed some applications. We have to create an API for one application (say application A), so that the others can use it (and its data).
The question is: we have already developed PHP classes for the model of application A. If we want to create an API, should we:
- re-use these classes (too much functionality for an API, too heavy...)
- create one PHP class, with some basic functions, that takes as input and returns only raw values (like strings, arrays... NOT complex classes)
- create another set of PHP classes, simpler, and designed only to be used by an external application (so only to get data easily)
Usually, an API is the 2nd solution (usable as well from PHP as through a web service, for example), but I find it a shame that we made a complex and useful class model only to tear it apart just to have functions, strings and arrays. The 3rd one seems to me to be the compromise, but a colleague of mine insists that this is not an API. Too bad...
What do you think?
Solution number 3 might be the best one from an architectural point of view. Basically you are using the Facade design pattern to simplify your API. Since I am dealing with it at the moment: in Patterns of Enterprise Application Architecture this approach is described as the Service Layer, which makes total sense, since you don't want to expose any user (meaning whoever will deal with your API) to more complexity than is actually needed or desired.

This includes using the simplest possible interface and transfer objects (raw values if they make sense). As soon as your Facade is being called through remoting services (like a web service), you will eventually have to break responses and requests down to raw values (data containers) anyway.
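In sketch form (Python, with made-up names): the facade exposes a few coarse-grained operations and returns simple transfer values, while the rich model classes stay behind it.

```python
# Rich internal model of application A: too heavy to expose directly.
class Order:
    def __init__(self, order_id, lines):
        self.order_id = order_id
        self.lines = lines  # list of (quantity, unit_price)

    def total(self):
        return sum(qty * price for qty, price in self.lines)


class OrderApiFacade:
    """What other applications see: few methods, plain return values."""

    def __init__(self, orders):
        self._orders = orders  # {order_id: Order}

    def get_order_summary(self, order_id):
        order = self._orders[order_id]
        # hand back a simple dict instead of the Order object itself
        return {"id": order.order_id, "total": order.total()}


facade = OrderApiFacade({42: Order(42, [(2, 9.5), (1, 5.0)])})
summary = facade.get_order_summary(42)
```

The internal model keeps its complexity; only the facade's flat values ever cross the process boundary, so serializing them for a web service is trivial.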
In the search for resources to become a better developer, I am looking for good examples of how to structure the code in n-tier applications.
Like... what does the business object do and look like, how does it interface with the data access layer, etc. How does the UI interface with the business layer, and does it interface with the DAL directly?
Do you know of great examples freely available, that are worthy of study?
Also, the book Patterns of Enterprise Application Architecture by Martin Fowler is a must-read. Google it or use the Amazon link provided: Enterprise Application Architecture on Amazon.
Have a look at this example:, which was developed as the example for this book:
It's .NET 2.0 and not perfect, but it's a great example of an n-tier application that makes good use of the provider model. We've adapted the pattern and use it for 90% of our in-house development. Make sure you don't confuse this pattern with the MVC pattern, as they are quite different.
Have a look at the wikipedia article on n-tier architecture:. The presentation tier is implemented as pages and user controls in the example I have given, the logic tier (commonly called BLL or business logic layer) is concrete C# classes defining specific behaviour, and the data tier (commonly called DAL or data access layer) is abstract C# classes defining the storage mechanism with concrete C# classes for using Sql Server as the storage medium.
Hope this helps.

I want all my layers (BLL, DAL and UI) to share classes (concrete or interfaces).
Is this really a bad practice?
I prefer not to return datatables from my DAL methods but instead to return objects that BLL can use directly.
I want to have a separate VS project with the classes that all layers should know about.
Example: I want to define a Lot class that all layers should be aware of. The UI should be able to receive Lot objects in order to display them, or make it possible for the user to submit a lot to be processed. The DAL should also be able to query the DB with Lot classes and return them. The BLL, on the other hand, should get these lots and apply business rules to them.
If this is completely wrong what are the alternatives?
I want all my layers (BLL, DAL and UI) to share classes (concrete or interfaces).
It depends on what type of classes. If you need them all to access common domain entities, then sure.
The important part is what you allow those layers to do with those classes. Your client/UI layer shouldn't be able to modify the domain entities (and persist them) without going through some centralized business logic. This means your DAL shouldn't be accessible by your UI, although they can both share common entities, interfaces, etc...
A common approach is something like this:
UI -> BLL -> DAL -> Persistence storage (DB, file, etc...)
Each of those layers can access common classes. As long as the UI can't directly access the DAL, you should be okay. You have a couple of options for this:
You end up with something like:
UI -> Service -> BLL -> DAL -> Persistence storage (DB, file, etc...)
I would strongly recommend Patterns of Enterprise Application Architecture by Martin Fowler. It will provide you with a good foundation for layering your application.
I prefer not to return datatables from my DAL methods but instead to return objects that BLL can use directly.
That's a good idea. This is where the idea of ORM comes into play. The DAL will typically know how to talk to the DB, and the DAL will also know how to convert DB-specific structures into your domain model. Domain entities go into, and back out of, the DAL. Within the DAL, domain entities are converted to DB-specific structures when persisting data. And the reverse happens: when the BLL requests data, the DAL retrieves the data and converts it to domain entities before passing it back out.
Obviously, "Hello World" doesn't require a separated, modular front-end and back-end. But any sort of Enterprise-grade project does.
Assuming some sort of spectrum between these points, at which stage should an application be (conceptually, or at a design level) multi-layered? When a database or some external resource is introduced? When you find that you're anticipating spaghetti code in your methods/functions?
There is no real answer to this question. It depends largely on your application's needs, and numerous other factors. I'd suggest reading some books on design patterns and enterprise application architecture. These two are invaluable:
Design Patterns: Elements of Reusable Object-Oriented Software
Patterns of Enterprise Application Architecture
Some other books that I highly recommend are:
The Pragmatic Programmer: From Journeyman to Master
Refactoring: Improving the Design of Existing Code
No matter your skill level, reading these will really open your eyes to a world of possibilities.
A philosophical question here. When it comes to OOP and databases, specifically programs where the classes are backed by databases, what is the best way to approach this? For instance, a team class might have a collection of players. The program could just load all of the team data from the database at startup, do all the manipulation in memory, and then write to the database at close. Or is it better to write each data manipulation to the database as changes occur? And if the latter is the better way, why load the data into memory in the first place?
The other concern is that it seems to me that databases break standard OOP in one important way. With my team class with the collection of players, using OOP, the player class would not need to have a property to hold the team name. The player would get the team name from the team class of which it is a member. Now, to save the player in the database, each player record will have to have a column for the team name (or team id, but that's the same thing).
In other words, if you needed a GetAllPlayers() method, would you make it a member method in the team class, to return from memory all of the players in the collection, or make a static method in the player class to get all of the players from the database?
Anyone got any tips on how to answer these questions?
It's been a while since I have taken a programming class. Anyone know of a good text book that goes into understanding the best approach here?
Databases break object-orientation in a much more fundamental way. (Or objects break the relational model. It depends on whether you're a middle tier OO person or a DBA.)
Relational databases are set based by definition, declarative in nature. Object-oriented languages are object-instance based. Making the two work together is difficult because of "object-relational impedance mismatch". This is why you see so many ORM solutions (e.g. TopLink, Hibernate, etc.) All of them are trying to fool object-oriented programmers into thinking that they only need deal with objects and not worry about relational databases.
However you implement it, I think persistence should be separate from model objects. I usually put relational code in an interface-based data access layer. That way model objects don't have to know whether or not they're persisted, and I isolate the CRUD operations in a single package.
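A minimal sketch of that separation (hypothetical names): persistence hides behind an interface, so model code never learns whether or how it is persisted, and tests can use an in-memory implementation.

```python
from abc import ABC, abstractmethod

class AccountRepository(ABC):
    """Interface the rest of the code depends on; CRUD lives behind it."""
    @abstractmethod
    def add(self, account): ...

    @abstractmethod
    def find(self, account_id): ...

class InMemoryAccountRepository(AccountRepository):
    """Swap-in implementation for tests; a SQL-backed one would
    implement the same interface."""
    def __init__(self):
        self._rows = {}

    def add(self, account):
        self._rows[account["id"]] = dict(account)

    def find(self, account_id):
        return self._rows.get(account_id)
```

Because callers only see `AccountRepository`, switching from the in-memory version to a relational one is a wiring change, not a code change.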
As for recommended reading, I'll offer Fowler's Patterns of Enterprise Application Architecture for your consideration.
We have homework for our design patterns class in which we have to explain any one design pattern used in an API/framework. I was thinking I could do this using the Android APIs. I know that Android uses the command, observer, and template method patterns among many more, but it would be great if someone could point me to a starting reference document or so.
Thank you so much in advance.
Frameworks almost by definition tend to implement high-level patterns such as MVC or ORM patterns. These are not covered in the GOF text, although you will find them in other pattern books such as Martin Fowler's Patterns of Enterprise Application Architecture. Some GOF patterns are implemented at the framework or even language-level (like C# events/delegates as an example of the Observer pattern), but mostly GOF patterns are left to the individual developer to implement as needed, as the details tend to be application or domain-specific.
Android is the same way. It has a specific flavor of Model-View-Controller built in, but not too many GOF-specific patterns. You might consider the Activity lifecycle callbacks (onStart, onResume, etc.) as a kind of Observer pattern, although with only one dedicated subscriber.
Another example might be AsyncTask, which could be considered a species of the Command Pattern. I'll leave it to you to make the connection. It is homework after all.
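As a language-agnostic illustration (hypothetical names, not Android code), a Command captures everything a request needs and defers its execution to an invoker, much as AsyncTask defers background work:

```python
class Command:
    """Encapsulates a request as an object with a single execute()."""
    def execute(self):
        raise NotImplementedError

class DownloadCommand(Command):
    """Parameters are captured at construction; execution happens later."""
    def __init__(self, url, log):
        self.url, self.log = url, log

    def execute(self):
        self.log.append("downloading " + self.url)

class Invoker:
    """Queues commands and runs them without knowing what they do."""
    def __init__(self):
        self.queue = []

    def submit(self, command):
        self.queue.append(command)

    def run_all(self):
        for command in self.queue:
            command.execute()
        self.queue.clear()
```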
What are the possible data access layer design-patterns for c# applications ?
Any advice will be useful. Thanks.
Assume a data structure Person used for a contact database. The fields of the structure should be configurable, so that users can add user-defined fields to the structure and even change existing fields. So basically there should be a configuration file like
FieldNo  FieldName  DataType  DefaultValue
0        Name       String    ""
1        Age        Integer   "0"
...
The program should then load this file, manage the dynamic data structure (dynamic not in a "change during runtime" way, but in a "user can change via configuration file" way) and allow easy and type-safe access to the data fields.
I have already implemented this, storing information about each data field in a static array and storing only the changed values in the objects.
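That approach (a shared field registry plus per-object storage of only the changed values) can be sketched like this; the two sample fields come from the example configuration above:

```python
# Hypothetical field registry built from the configuration file:
# field name -> (Python type, default value as stored in the config)
FIELDS = {
    "Name": (str, ""),
    "Age": (int, "0"),
}

class Record:
    """Stores only values that differ from the default; reads fall
    back to the registry's default, converted to the declared type."""
    def __init__(self):
        self._values = {}

    def get(self, field):
        ftype, default = FIELDS[field]
        return self._values.get(field, ftype(default))

    def set(self, field, value):
        ftype, _ = FIELDS[field]
        if not isinstance(value, ftype):
            raise TypeError(f"{field} expects {ftype.__name__}")
        self._values[field] = value
```

This is close to the Type Object pattern: the registry plays the role of a class object that can be edited without recompiling.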
My question: Is there any pattern describing that situation? I guess that I'm not the first one running into the problem of creating a user-adjustable class?
Thanks in advance. Tell me if the question is not clear enough.
Consider the following situation. I have 3 tables in a resource planning project (created using MySQL):
raw_materials (stores details about all raw materials)
procurement_orders (stores details about requests sent to vendors asking them to send us quotations. This table references the raw_materials table.)
quotations (contains details about all quotations sent by vendors to us. This table references the procurement_orders table).
I have created dbManagers for each of them using Java, which specialize in storing, retrieving, and deleting data from the respective tables.
My question is: if I want to retrieve data which needs data from other tables, what is the best way to do it? E.g. I want quotations of all raw_materials having stock below x. Then, as I see it, there are two ways
My question is: what is the best way to design this in a proper OO way without hurting performance?
This is a rather larger question than can easily be handled quickly, but aligning java classes to tables is not necessarily a good approach, precisely because as you note, some things naturally involve multiple tables in complex relations.
I'd recommend starting by looking at Martin Fowler's book Patterns of Enterprise Application Architecture.
There are also some notes on the patterns in this book at his website.
Your usage most nearly resembles Table Data Gateway. Following this pattern, it would be perfectly reasonable to have methods in each of your dbManager classes that retrieves data from its associated table but involves another table as part of a where clause.
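A sketch of that idea in Python with SQLite standing in for MySQL; the column names are assumptions, since the question doesn't list them. The gateway owns one table, but a finder's WHERE clause may reach across others:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE raw_materials (id INTEGER PRIMARY KEY, name TEXT, stock INTEGER);
CREATE TABLE procurement_orders (id INTEGER PRIMARY KEY, material_id INTEGER);
CREATE TABLE quotations (id INTEGER PRIMARY KEY, order_id INTEGER, price REAL);
""")

class QuotationGateway:
    """Table Data Gateway for quotations: all SQL touching that table
    lives here, even when the filter involves other tables."""
    def __init__(self, conn):
        self.conn = conn

    def find_for_low_stock(self, threshold):
        return self.conn.execute(
            """SELECT q.id, q.price
               FROM quotations q
               JOIN procurement_orders o ON o.id = q.order_id
               JOIN raw_materials m ON m.id = o.material_id
               WHERE m.stock < ?
               ORDER BY q.id""", (threshold,)).fetchall()
```

This keeps the join inside the gateway instead of having one dbManager call another and filter in Java, which would pull far more rows than needed.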
You might also want to consider Object-relational mapping as implemented for instance by Hibernate.
Does anyone know of any design patterns for interfacing with relational databases? For instance, is it better to have SQL inline in your methods, or instantiate a SQL object where you pass data in and it builds the SQL statements? Do you have a static method to return the connection string and each method just gets that string and connects to the DB, performs its action, then disconnects as needed or do you have other structures that are in charge of connecting, executing, disconnecting, etc?
In otherwords, assuming the database already exists, what is the best way for OO applications to interact with it?
Thanks for any help.
I recommend the book Patterns of Enterprise Application Architecture by Martin Fowler for a thorough review of the most common answers to these questions.
Does the below code show an acceptable way to cache both fully built pages and database queries?
The caching of built pages is started with the __construct in the controller and then finished with the __destruct; in this example all pages are cached for a default of 15 minutes to a file.
The query caching is done with apc and the results are stored in memory for the specified amount of time per query. In the actual site there would be another class for the apc cache so that it could be changed if required.
My aim was to build the most simple possible mvc, have I failed or am I on the right sort of track?
Controller
//config
//autoloader
//initialiser

class controller {
    var $cacheUrl;

    function __construct(){
        $cacheBuiltPage = new cache();
        $this->cacheUrl = $cacheBuiltPage->startFullCache();
    }

    function __destruct(){
        $cacheBuiltPage = new cache();
        $cacheBuiltPage->endFullCache($this->cacheUrl);
    }
}

class forumcontroller extends controller {
    function buildForumThread(){
        $threadOb = new thread();
        $threadTitle = $threadOb->getTitle($data['id']);
        require 'thread.php';
    }
}
Model
class thread extends model {
    public function getTitle($threadId){
        $core = Connect::getInstance();
        $data = $core->dbh->selectQuery("SELECT title FROM table WHERE id = 1");
        return $data;
    }
}
Database
class database {
    public $dbh;
    private static $dsn = "mysql:host=localhost;dbname=";
    private static $user = "";
    private static $pass = '';
    private static $instance;

    private function __construct () {
        $this->dbh = new PDO(self::$dsn, self::$user, self::$pass);
    }

    public static function getInstance(){
        if(!isset(self::$instance)){
            $object = __CLASS__;
            self::$instance = new $object;
        }
        return self::$instance;
    }

    public function selectQuery($sql, $time = 0) {
        $key = md5('query'.$sql);
        if(($data = apc_fetch($key)) === false) {
            $stmt = $this->dbh->query($sql);
            $data = $stmt->fetchAll();
            apc_store($key, $data, $time);
        }
        return $data;
    }
}
Cache
class cache {
    var $url;

    public function startFullCache(){
        $this->url = 'cache/'.md5($_SERVER['PHP_SELF'].$_SERVER['QUERY_STRING']);
        if((@filesize($this->url) > 1) && (time() - filectime($this->url)) < (60 * 15)){
            readfile($this->url);
            exit;
        }
        ob_start();
        return $this->url;
    }

    public function endFullCache($cacheUrl){
        $output = ob_get_contents();
        ob_end_clean();
        $output = sanitize_output($output);
        file_put_contents($cacheUrl, $output);
        echo $output;
        flush();
    }
}
View
<html>
    <head>
        <title><?=$threadTitle[0]?> Thread - Website</title>
    </head>
    <body>
        <h1><?=$threadTitle[0]?> Thread</h1>
    </body>
</html>
First of all, you have to understand that caching of GET requests is usually done all over the internet, especially if your user is connecting via some kind of proxy.
And then there is also ability to set long expire time, so that user's browser does the caching of the HTML page and/or the media files.
For starting to look into this, you should read these two articles.
Before you begin attempting to add cache, make sure that you actually need it. Do some benchmarking and look into what your bottlenecks are. Optimization for the sake of optimizing is pointless and often harmful.
If you KNOW (and have data to back it up; it's not about "feeling") that your application is making too many SQL queries, instead of jumping to query caching you should begin by examining what those queries are for.
For example:
If you see that you are performing a slow query each page-view just to generate a tag cloud, you should instead store the already completed tag cloud (as an HTML fragment) and update it only when something has changed.
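A sketch of that fragment-caching idea (hypothetical class): the rendered HTML is kept until the underlying data changes, instead of being re-queried and re-rendered on every page view.

```python
class TagCloud:
    """Caches the rendered fragment; any data change invalidates it."""
    def __init__(self):
        self.counts = {}
        self._html = None   # cached fragment
        self.renders = 0    # instrumentation for the demo only

    def add_tag(self, tag):
        self.counts[tag] = self.counts.get(tag, 0) + 1
        self._html = None   # invalidate on change

    def html(self):
        if self._html is None:
            self.renders += 1
            items = "".join(f"<span>{t} ({n})</span>"
                            for t, n in sorted(self.counts.items()))
            self._html = f"<div class='tags'>{items}</div>"
        return self._html
```

In a real site the fragment would live in APC/files/memcached rather than in an attribute, but the invalidate-on-write shape is the same.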
Also, "add cache" should never be your first step, when trying to improve performance. If your queries are slow, use
EXPLAIN to see if they are using indexes properly. Make sure that you are not querying same data multiple times. And also see, if queries actually make sense.
I am not sure where you learned to write this way, but you seem to be missing the whole point of MVC.
You also seem to be missing the meaning of the word "layer". It is NOT a synonym for "class". Layers are groups of components that are reusable in similar circumstances [1].
You might benefit from reading this and this post. They should help you understand the basic of this architectural pattern.
There are basically two points at which you can do the caching, when working with MVC (or MVC-inspired) architecture: views and persistence logic.
Caching for views would mostly entail reuse of once rendered templates (each view in MVC would be juggling multiple templates and associated UI logic). See the example with tag cloud earlier.
The caching in persistence logic would depend on your implementation.
You can cache the data that is supposed to be passed to the domain objects within services.
Note: in a real application the new instances would not be created here; instead you would be using some factories.
$user = new User;
$user->setId( 42 );

$cache = new Cache;
$mapper = new UserMapper;
if ( !$cache->fetch( $user )) {
    $mapper->fetch( $user );
}

$user->setStatus( User::STATUS_BANNED );

$cache->store( $user );
$mapper->store( $user );
// User instance has been populated with data
The other point for cache would be Repositories and/or Identity maps, if you expand the persistence layer beyond use of simple mapper. That would be too hard explain in with a simple code example. Instead you should read Patterns of Enterprise Application Architecture book.
Please stop using singletons for establishing the DB connection. It makes it impossible to write unit tests and causes tight coupling to a specific class name. I would recommend instead using this approach and injecting the DB connection into the classes that require it.
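To make the injection point concrete, here is a minimal sketch (hypothetical class names): the mapper receives its connection, so a unit test can hand in a fake instead of a real database.

```python
class UserMapper:
    """Depends on whatever connection it is given, not on a singleton."""
    def __init__(self, connection):
        self.conn = connection

    def fetch_name(self, user_id):
        row = self.conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
        return row[0] if row else None

class FakeConnection:
    """Test double: answers canned rows, no database required."""
    def execute(self, sql, params):
        class Cursor:
            def fetchone(inner):
                return ("alice",) if params == (1,) else None
        return Cursor()
```

The production wiring would pass a real PDO/DB-API connection; the test passes `FakeConnection()`. Neither change touches `UserMapper`.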
Also, if your query has no parameters, there is no point in preparing it. Prepared statements are for passing data to SQL, but none of your examples has any parameters.
This issue arises mostly because of your magical Database class. Instead you should separate the persistence logic into multiple data mappers. This way you would not be faced with the self-inflicted problem of having a single method for all queries.
The var keyword is an artifact of PHP4. Nowadays we use public, private and protected.
You are hard coding the connection details in your code. It is basically a violation of OCP (the dumbed-down version: here).
I am in the process of integrating a number of legacy systems. They each have different databases, and I will need to write data access code for most of them.
The database schemas cannot be changed (I might be able to apply some indexes and such, but tables and their columns must retain their structure). Some of the databases have an OK design, with appropriate relationships and primary/foreign keys, and some of the others lack that very much.
Which ORM would you choose for this task? I would like to use the same ORM across the project, and my requirements are:
I currently have the most experience with LINQ-To-SQL; but I have a feeling it might be the wrong choice for this project. I am willing to invest some time in learning a new framework.
At a guess, I think an ORM might cause you more trouble than it saves. If you have several different legacy databases where some of them are poorly designed, you might find it easier to build the data access layer at a lower level than an ORM. Fowler's Patterns of Enterprise Application Architecture does quite a good job of cataloguing various approaches to structuring data access layers.
Some of the data access layer might be amenable to a code generation solution; however the presence of a variety of schemas (some messy as you say) suggests that a one-size-fits-all approach may not work, or may involve disproportionate effort to make it play nicely with all of the legacy databases.
I am not a pro in MySQL, but I want to do something like an object layer above relational MySQL tables.
I want to have very many "structures" with fields of type "bigint", "longtext", "datetime", "double" stored in just 7 tables.
entity_types (et_id, et_name) - list of "structures";
entity_types_fields (etf_id, parent_et_id, ....., etf_ident, etf_type) - list of structure properties stored in one table for ALL structures; etf_type contains an int value (0,1,2,3) which refers to one of the 4 tables described below.
entities (e_id, et_id) - list of all available entities (id and type id of entity)
and 4 data tables (containing all data for entities) -
entities_props_bigint (parent_e_id, parent_etf_id, ep_data) - for BIGINT data properties
entities_props_longtext (parent_e_id, parent_etf_id, ep_data) - for LONGTEXT data properties
entities_props_datetime (parent_e_id, parent_etf_id, ep_data) - for DATETIME data properties
entities_props_double (parent_e_id, parent_etf_id, ep_data) - for DOUBLE data properties
What is the best way to do selection from such a data layer?
Let's say I have a list of e_id (ids of entities); each entity can have any type. I want to get a predefined list of properties. If some entities don't have such a property, I want it to come back as NULL.
Do you have some info about how to do this? Maybe you have some links or have already dealt with such things.
Thanks!
You're reinventing the wheel by implementing a whole metadata system on top of a relational database. Many developers have tried to do what you're doing and then use SQL to query it, as if it is relational data. But implementing a system of non-relational data and metadata in SQL is harder than you expect.
I've changed the relational tag of your question to eav, because your design is a variation of the Entity-Attribute-Value design. There's a limit of five tags in Stack Overflow. But you should be aware that your design is not relational.
A relational design necessarily has a fixed set of attributes for all instances of an entity. The right way to represent this in a relational database is with columns of a table. This allows you to give a name and a data type to each attribute, and to ensure that the same set of names and their data types apply to every row of the table.
What is the best way to do selection from such a data layer?
The only scalable way to query your design is to fetch the attribute data and metadata as rows, and reconstruct your object in application code.
SELECT e.e_id, f.etf_ident, f.etf_type,
       p0.ep_data AS data0,
       p1.ep_data AS data1,
       p2.ep_data AS data2,
       p3.ep_data AS data3
FROM entities AS e
INNER JOIN entity_type_fields AS f
  ON e.et_id = f.parent_et_id
LEFT OUTER JOIN entities_props_bigint AS p0
  ON (p0.parent_e_id, p0.parent_etf_id) = (e.e_id, f.etf_id)
LEFT OUTER JOIN entities_props_longtext AS p1
  ON (p1.parent_e_id, p1.parent_etf_id) = (e.e_id, f.etf_id)
LEFT OUTER JOIN entities_props_datetime AS p2
  ON (p2.parent_e_id, p2.parent_etf_id) = (e.e_id, f.etf_id)
LEFT OUTER JOIN entities_props_double AS p3
  ON (p3.parent_e_id, p3.parent_etf_id) = (e.e_id, f.etf_id)
In the query above, each entity field should match at most one property, and the other data columns will be null. If all four data columns are null, then the entity field is missing.
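The reconstruction step in application code can be as simple as folding the fetched rows into one dictionary per entity (a sketch; real code would also dispatch on etf_type to pick the non-null data column):

```python
def rebuild_entities(rows):
    """rows: iterable of (e_id, field_ident, value) triples as fetched
    from the query above. Missing properties simply stay absent, so a
    lookup with .get() yields None, matching the asker's requirement."""
    entities = {}
    for e_id, ident, value in rows:
        entities.setdefault(e_id, {})[ident] = value
    return entities
```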
Re your comment, okay now I understand better what you are trying to do. You have a collection of entity instances in a tree, but each instance may be a different type.
Here's how I would design it:
Store any attributes that all your entity subtypes have in common in a sort of super-type table.
entities(e_id,entity_type,name,date_created,creator,sku, etc.)
Store any attributes specific to an entity sub-type in their own table, as in Martin Fowler's Class Table Inheritance design.
entity_books(e_id,isbn,pages,publisher,volumes, etc.)
entity_videos(e_id,format,region,discs, etc.)
entity_socks(e_id,fabric,size,color, etc.)
Use the Closure Table design to model the hierarchy of objects.
entity_paths(ancestor_e_id, descendant_e_id, path_length)
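A runnable sketch of Closure Table maintenance and a descendants query, using SQLite for brevity; only the entity_paths table from above is modeled:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE entity_paths (
  ancestor_e_id INTEGER, descendant_e_id INTEGER, path_length INTEGER);
""")

def add_node(conn, node_id, parent_id=None):
    # every node is its own ancestor at depth 0
    conn.execute("INSERT INTO entity_paths VALUES (?, ?, 0)",
                 (node_id, node_id))
    if parent_id is not None:
        # copy every path that ends at the parent, extended by one level
        conn.execute("""
          INSERT INTO entity_paths (ancestor_e_id, descendant_e_id, path_length)
          SELECT ancestor_e_id, ?, path_length + 1
          FROM entity_paths WHERE descendant_e_id = ?""",
          (node_id, parent_id))

def descendants(conn, node_id):
    return [r[0] for r in conn.execute(
        """SELECT descendant_e_id FROM entity_paths
           WHERE ancestor_e_id = ? AND path_length > 0
           ORDER BY descendant_e_id""", (node_id,))]
```

The payoff is that fetching a whole subtree is a single join against entity_paths, with no recursive queries.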
For more information on Class Table Inheritance and Closure Table, see my presentations Practical Object-Oriented Models in SQL and Models for Hierarchical Data in SQL, or my book SQL Antipatterns: Avoiding the Pitfalls of Database Programming, or Martin Fowler's book Patterns of Enterprise Application Architecture.
?"
I'm a Java developer, but I believe that the language-agnostic answer is "yes".
Have a look at Martin Fowler's "Patterns Of Enterprise Application Architecture". I believe that technologies like LINQ were born for this.
I had to clean up code for an online college application. There's nothing particularly wrong with it, but it was complicated. Different degree programs had different prerequisites, fees, required documentation, and questions. On top of that, students coming from the military get different fees, and previous students pay no fees and skip steps.
Obviously all this logic can get pretty complex - and cause bugs. I'm wondering if there's a design pattern or coding method that would help with organizing the logic. I'm using PHP, not that it matters.
The strategy pattern seems to have the most potential, but it seems to me I'd need strategies on top of strategies for this.
I imagine the field of "Business Logic" might cover this at least partially, but searches haven't turned up indications of any elegant coding methods to use.
I think a combination of patterns would be helpful. Fowler's Domain Model pattern aims to tame complex domain logic. Using a Layered architectural pattern is another option, as described in POSA1. Strategy pattern also seems to be a good idea for defining a family of related algorithms.
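As a sketch of the Strategy idea applied to the fee rules described in the question (the class names and amounts are made up for illustration):

```python
class FeeStrategy:
    """Family of interchangeable fee-calculation algorithms."""
    def fee(self, base):
        raise NotImplementedError

class StandardFee(FeeStrategy):
    def fee(self, base):
        return base

class MilitaryFee(FeeStrategy):
    """Hypothetical rule: military applicants pay half."""
    def fee(self, base):
        return base / 2

class ReturningStudentFee(FeeStrategy):
    """Previous students pay nothing and skip the fee step."""
    def fee(self, base):
        return 0

def application_fee(student_kind, base=50):
    strategies = {
        "standard": StandardFee(),
        "military": MilitaryFee(),
        "returning": ReturningStudentFee(),
    }
    return strategies[student_kind].fee(base)
```

Each rule lives in its own class, so adding a new applicant category means adding a strategy, not another branch in a growing conditional.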
I have an MVC app I'm writing. There will be the need for multiple instances of the same page to be open, each accessing different records from a database, these record objects will also need to be passed through a flow of pages, before finally being updated.
What's the best, and most correct, way of achieving this - should/can I create a custom model binder that links to an object via its unique ID and then create each record-object in the session, updating them as I go through each one's page flow and then finally calling the update method? Or is there a better way of dealing with this?
Cheers
MH
Technically, that would be possible, but I don't think it is advisable. When you look at the signature of IModelBinder, you will have to jump through some hoops related to the ControllerContext if you want to be able to access the rest of your application's context (such as how to dehydrate objects based on IDs).
It's possible, but so clunky that you should consider whether it's the right approach. In my opinion, a ModelBinder's responsibility is to map HTTP request data to strongly typed objects. Nothing more and nothing less - it is strictly a mapper, and trying to make it do more would be breaking the Single Responsibility Principle.
It sounds to me like you need an Application Controller - basically, a class that orchestrates the Views and the state of the underlying Model. You can read more about the Application Controller design pattern in Patterns of Enterprise Application Architecture.
Since a web application is inherently stateless, you will need a place to store the intermediate state of the application. Whether you use sessions or a custom durable store to do that depends on the application's requirements and the general complexity of the intermediate data.
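A minimal, framework-free sketch of the idea (all names hypothetical): one object owns the page flow and decides what comes next, while its state is something you would persist in the session between requests.

```python
class ApplicationController:
    """Decides which view follows which; state is a plain dict so it
    can be stored in, and restored from, the session."""
    FLOW = ["details", "documents", "payment", "done"]

    def __init__(self, state=None):
        self.state = state or {"step": 0}

    def current_view(self):
        return self.FLOW[self.state["step"]]

    def next(self):
        if self.state["step"] < len(self.FLOW) - 1:
            self.state["step"] += 1
        return self.current_view()
```

Individual controllers/actions ask this object "where to next?" instead of each hard-coding the flow, which is what keeps a multi-page wizard manageable.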
I have used to use PHP and MySQL a lot "back in the day" to create all kinds of websites including text-based games.
Back when I was creating these project I was using such code as:
$query = 'SELECT user FROM users WHERE user_id = 1';
$result = mysql_query($query);
To get results from the database. I realise that this is a very bad way to be doing it; my question is what is the best way to do SELECT, UPDATE, DELETE etc. now. As these will be used all over the website, should I be making functions in one file, and so on? I would like to know the "best/safest" way to go about doing this. I understand that building SQL statements like this is bad as they are open to SQL injection; I would like to know what this means exactly and how to get around this problem safely.
It's mainly handling the database that I seem not to understand. As I have not been paying attention to PHP for a few years now, I see many things have moved on from when I created my projects.
I have looked around the net and have found W3Schools to be a useful resource but I would like to hear it from people that are using this everyday to find out how I should be doing things.
Overall, how do I go about safely connecting to a database, and how can I grab data from the database safely for the whole website to use?
This includes:
And anything else that you can think of to help me understand how to structure a "safe" website.
Thanks to anyone that replies to this/these questions, I will be very active in comments for this asking more questions about things I do not full understand.
Side Note: this website will be created using HTML, JavaScript, PHP and using a MYSQL database.
It all starts with Separation of Concerns, and nowadays the most popular architecture for web applications is Model-View-Controller (you don't need to invent one; you may use one of the existing PHP frameworks, each of which is bundled with some kind of ORM).
Since you are asking about isolating the database code, models are the first thing you should learn from the MVC paradigm. All operations on the data, including any business logic and calculations and all DB operations, go into the model classes. There are different ways you may structure them. One popular pattern is Active Record: literally a class per table. I'm not a big fan of this; you may instead create classes after your logical entities (user, game, etc.), and they may operate on multiple tables. But those classes may be built upon Active Record anyway.
If you are working with existing code, you can start by isolating the queries in objects, so all code that works with the database is in one place. Then you can start restructuring the objects to follow the chosen architecture: a User object works with the users table, and so on. After you have separated the DB code into objects, you can easily switch to PDO or another DB implementation and protect yourself from SQL injection.
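To make the SQL-injection point concrete, here is a minimal sketch using Python's sqlite3 in place of PHP/PDO; the idea is identical: the driver binds the placeholder, so user input never becomes part of the SQL text.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id INTEGER, user TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'markus')")

def get_user(conn, user_id):
    # ? is a bound parameter: the value is passed to the driver
    # separately and can never change the shape of the query
    row = conn.execute(
        "SELECT user FROM users WHERE user_id = ?", (user_id,)).fetchone()
    return row[0] if row else None
```

With string concatenation, passing "1 OR 1=1" would rewrite the query and return every row; with a bound parameter it is just a value that matches nothing.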
In the end I'd like to recommend a few books. Refactoring - the best book about how to turn legacy code into beautiful OO solutions.
Patterns of Enterprise Application Architecture - hard to read but has a lot of useful info.
Implementation Patterns - unlike books on design patterns, this one is focused on the small decisions, literally on each line of code.
I'm asking all of you who aren't using a library for this, but are constructing your own objects for managing data flowing to and from your database tables. Do I have a recordset object? one object per row of data? Both? Neither? Any suggestions or experiences welcome. Please don't tell me to use an ORM or other such toolkit. It's overkill for my project and besides, that's not the question now, is it?
I'd strongly suggest picking up Martin Fowler's Patterns of Enterprise Application Architecture; it describes a number of database patterns that would be helpful to know, and gives you an idea of the evolution of patterns up to full-on ORM libraries.
specific patterns you may be interested in:
These basic patterns will give you an idea of how to structure your objects, and with the more advanced patterns (like active record/data mapper) you'll see how those relate to problem domains beyond where your needs are at the moment.

I'm toying with an application that is, roughly speaking, a sort of modeler application for the building industry. In the future I'd like it to be possible for the user to use both SI units and imperial. From what I understand, it's customary in the US building industry to use fractions of inches when specifying measurements, e.g. 3 1/2" - whereas in SI we'd write 3.5, not 3 1/2. I'm looking for a way to work with these different systems in my software - storing them, doing calculations on them etc., not only parsing what a user enters. It should be able to show the user a measurement in the way he entered it, yet be able to calculate with other measurements - for example, add 3 cm to 1 1/2 inch. So if a user draws a length of wall of 5 feet and another one of 3 meters, the total measurement should be shown in the default unit system the user selected.
I'm undecided yet on how much flexibility I should add for entering data; e.g. if the user enters 1 foot 14 inches, should it show 2 feet 2 inches the next time the measurement is shown? However, before I decide things like that, I'm looking for a way to store measurements in an exact form, which is what my question is about.
I'm using C++ and I've looked at Boost.Units, but that doesn't seem to offer a way to deal with fractions.
The simple option is to convert everything to millimeters, but rounding errors would make it impossible to go back to the exact measurement a user entered (if he entered it in imperial measurements). So I'll need something more complex.
For now I'm using a class that is tentatively named 'Distance' and looks conceptually like this:
class Distance {
public:
    Distance(double value);

    // operators +, -, *, /
    Distance operator+(const Distance& that);
    ...etc...

    std::string StringForm(); // Returns a textual form of the value

    Distance operator=(double value);

private:
    <question: what should go here?>
}
This clearly shows where my problems are. The most obvious thing to do would be to have an enum that says whether this Distance is storing SI or imperial units, and have fields (doubles, presumably) that store the meters, centimeters and millimeters if it's in SI units and feet and inches if it's imperial. However this will make the implementation of the class littered with if(SI) else ..., and is very wasteful in memory. Plus I'd have to store a numerator and denominator for the feet and inches to be able to exactly store 1/3", for example.
So I'm looking for general design advice on how I should solve these problems, given my design requirements. Of course if there's a C++ library out there that already does these things, or a library in another language I could look at to copy concepts from, that would be great.
Take a look at Martin Fowler's Money pattern from Patterns of Enterprise Application Architecture - it is directly applicable to this situation. Recommended reading. Fowler has also posted a short writeup on his site of the Quantity pattern, a more generic version of Money.
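A minimal sketch of the Quantity idea (hypothetical class; the conversion factor 1 inch = 25.4 mm is exact by definition): store the amount as an exact fraction of a base unit plus the unit the user typed, so imperial fractions survive a round trip without floating-point rounding.

```python
from fractions import Fraction

class Length:
    """Quantity-style value object: exact amount in millimetres plus
    the unit it was entered in, so 1 1/2" can be shown back exactly."""
    MM_PER_UNIT = {"mm": Fraction(1), "cm": Fraction(10),
                   "in": Fraction(254, 10), "ft": Fraction(254 * 12, 10)}

    def __init__(self, amount, unit):
        self.unit = unit                       # remember how it was entered
        self.mm = Fraction(amount) * self.MM_PER_UNIT[unit]

    def __add__(self, other):
        result = Length(0, self.unit)
        result.mm = self.mm + other.mm
        return result

    def in_unit(self, unit):
        return self.mm / self.MM_PER_UNIT[unit]
```

The same shape ports to C++ with a rational-number type in place of Fraction; the key design choice is a single exact base unit plus a remembered display unit.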
I'm trying to decide if MSMQ is the right tool for communication between our application and a third party web service we are currently communicating with directly. We're looking to uncouple this such that if the service goes down, life could still go on as normal.
I can't find anything outside the usual MS fluff about it being the greatest thing and solving all your problems etc etc. It would be really useful if I could find some information that was somewhere between marketing fluff and API - like an architecture diagram of the components and how they integrate with each other, and more importantly how I integrate with them.
It's probably that I'm just looking for the information in the wrong places, so if someone could point me in the right direction, I'd appreciate it.
MSMQ is an implementation of a message queue, as are WebSphere MQ and a bunch of other systems. When looking into the concepts and high-level architecture, I would suggest reading up on message queues and how they are applied in disconnected scenarios. I can highly recommend Patterns of Enterprise Application Architecture. For specific examples on MSMQ, check out Pro MSMQ: Microsoft Message Queue Programming; it doesn't contain a lot of special information, but it does group it a lot better than most resources available on the internet. This Hello World with MSMQ article will give you a nice overview of what it entails, and it's easily executed on a development system.

I've developed on the Yii Framework for a while now (4 months), and so far I have encountered some issues with MVC that I want to share with experienced developers out there. I'll present these issues by listing their levels of complexity.
[Level 1] CR (create/update) form. First off, we have a lot of forms. Each form itself is a model, so each has some validation rules, some attributes, and some operations to perform on the attributes. In a lot of cases, each of these forms does both updating and creating records in the db using a single active record object.
-> So at this level of complexity, a form has to
when opened,
be able to display the db-friendly data from the db in a human-friendly way
be able to display all the form fields with the attributes of the active record object. Adding, removing, altering columns from the db table has to affect the display of the form.
when saving, be able to format the human-friendly data into db-friendly data before storing it
when validating, be able to perform the basic validations enforced by the active record object; it also has to perform other validations to fulfill some business rules.
when validation fails, be able to roll back changes made to the attributes as well as changes made to the db, and present the user with their originally entered data.
[Level 2] Extended CR form. A form that can perform creation/update of records from different tables at once. Not just that, whether a form creates/updates one of its records can sometimes depend on other conditions (more business rules), so a form can sometimes update records in tables A and B but not D, and sometimes update records in A and D but not B. -> So at this level of complexity, we see a form has to:
be able to satisfy [Level 1]
be able to conditionally create/update certain records, and conditionally create/update certain columns of certain records.
[Level 3] The Tree of Models. The role of a form in an application is, in many ways, a port that lets users interact with your application. To satisfy requests, this port will interact with many other objects which, in turn, interact with many more objects. Some of these objects can be seen as models. Active Record is a model, but a Mailer can also be a model, and so is a RobotArm. These models use one another to satisfy a user's request. Each model can perform its own operations, and the whole tree has to be able to roll back any changes made in the case of error/failure.
Has anyone out there come across or been able to solve these problems?
I've come up with many ideas, like encapsulating model attributes in ModelAttribute objects to tackle their existence throughout the client, server, and db tiers.
I've also thought we should give the tree of models an Observer to observe and notify the observed models to roll back changes when errors occur. But what if multiple observers can exist? What if a node uses its parent's observer but gives its children other observers?
Engineers, developers, Rails, Yii, Zend, ASP, JavaEE, any MVC guys, please join this discussion for the sake of science.
--Update to teresko's response:---
@teresko I actually intended to incorporate the services into the execution inside a unit of work and have the unit of work not worry about new/updated/deleted. Each object inside the unit of work will be responsible for its state and be required to implement its own commit() and rollback(). Once an error occurs, the unit of work will roll back all changes from the newest registered object to the oldest registered object, since we're not only dealing with a database; we can have mailers, publishers, etc. Otherwise, if the tree executes successfully, we call commit() from the oldest registered object to the newest registered object. This way the mailer can save the mail and send it on commit.
Using a data mapper is a great idea, but we still have to make sure the columns in the db match the data mapper and the domain object. Moreover, an extended CR form, or a model that has its attributes depending on other models, has to match their attributes in terms of validation and datatype. So maybe an attribute can be an object and shipped from model to model? An attribute can also tell if it's been modified, what validation should be performed on it, and how it can be human-friendly, application-friendly, and db-friendly. Any update to the db schema will affect this attribute, thereby throwing exceptions that require developers to make changes to the system to satisfy this change.
The root of your problem is misuse of active record pattern. AR is meant for simple domain entities with only basic CRUD operations. When you start adding large amount of validation logic and relations between multiple tables, the pattern starts to break apart.
Active record, at its best, is a minor SRP violation, for the sake of simplicity. When you start piling on responsibilities, you start to incur severe penalties.
The best option is to separate the business and storage logic. Most often it is done by using domain objects and data mappers:
Domain objects (in other materials also known as business objects or domain model objects) deal with validation and specific business rules and are completely unaware of how (or even "if") data in them was stored and retrieved. They also let you have objects that are not directly bound to storage structures (like DB tables).
For example: you might have a LiveReport domain object, which represents current sales data. But it might have no specific table in the DB. Instead it can be serviced by several mappers that pool data from Memcache, an SQL database and some external SOAP service. And the LiveReport instance's logic is completely unrelated to storage.
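The separation the answer describes can be sketched as follows (a minimal Java sketch; the Customer class, its fields, and the in-memory map standing in for a real table are illustrative assumptions, not part of the answer):

```java
import java.util.HashMap;
import java.util.Map;

public class MapperDemo {
    // Domain object: validation and business rules only, no storage code.
    static class Customer {
        private final int id;
        private String name;
        Customer(int id, String name) {
            if (name == null || name.isEmpty())
                throw new IllegalArgumentException("name required");
            this.id = id;
            this.name = name;
        }
        int getId() { return id; }
        String getName() { return name; }
        void rename(String newName) {
            if (newName == null || newName.isEmpty())
                throw new IllegalArgumentException("name required");
            this.name = newName;
        }
    }

    // Data mapper: knows where the data lives (here a map standing in for a table).
    // The Customer itself never touches this class.
    static class CustomerMapper {
        private final Map<Integer, String> table = new HashMap<>();
        void save(Customer c) { table.put(c.getId(), c.getName()); }
        Customer find(int id) {
            String name = table.get(id);
            return name == null ? null : new Customer(id, name);
        }
    }

    public static void main(String[] args) {
        CustomerMapper mapper = new CustomerMapper();
        mapper.save(new Customer(1, "Ada"));
        System.out.println(mapper.find(1).getName()); // prints "Ada"
    }
}
```

Swapping the map for Memcache, SQL, or a SOAP client changes only the mapper, never the domain object.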
Data mappers know where to put the information from domain objects, but they do not do any validation or data integrity checks. Though they can handle exceptions that come from low-level storage abstractions, like violation of a UNIQUE constraint.
Data mappers can also perform transactions, but if a single transaction needs to span multiple domain objects, you should be looking to add a Unit of Work (more about it below).
In more advanced/complicated cases data mappers can interact with and utilize DAOs and query builders. But this is more for situations when you aim to create ORM-like functionality.
Each domain object can have multiple mappers, but each mapper should work only with a specific class of domain objects (or a subclass of one, if your code adheres to LSP). You also should recognize that a domain object and a collection of domain objects are two separate things and should have separate mappers.
Also, each domain object can contain other domain objects, just like each data mapper can contain other mappers. But in case of mappers it is much more a matter of preference (I dislike it vehemently).
Another improvement, that could alleviate your current mess, would be to prevent application logic from leaking in the presentation layer (most often - controller). Instead you would largely benefit from using services, that contain the interaction between mappers and domain objects, thus creating a public-ish API for your model layer.
Basically, with services you encapsulate complete segments of your model, that can (in the real world - with minor effort and adjustments) be reused in different applications. For example: Recognition, Mailer or DocumentLibrary would all be services.
Also, I think I should note that not all services have to contain domain objects and mappers. A quite good example would be the previously mentioned Mailer, which could be used either directly by a controller, or (what's more likely) by another service.
If you stop using the active record pattern, this becomes quite a simple problem: you need to make sure that you save only data from those domain objects which have actually changed since the last save.
As I see it, there are two ways to approach this:
Quick'n'Dirty
If something changed, just update it all ...
The way that I prefer is to introduce a checksum variable in the domain object, which holds a hash of all the domain object's variables (of course, with the exception of checksum itself).
Each time the mapper is asked to save a domain object, it calls a method isDirty() on this domain object, which checks if the data has changed. Then the mapper can act accordingly. This also, with some adjustments, can be used for object graphs (if they are not too extensive, in which case you might need to refactor anyway).
Also, if your domain object actually gets mapped to several tables (or even different forms of storage), it might be reasonable to have several checksums, one for each set of variables. Since mappers are already written for specific classes of domain objects, it would not strengthen the existing coupling.
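The checksum/isDirty() idea can be sketched like this (a Java sketch; the Product class and its fields are hypothetical, and a real mapper would hash all persistent fields):

```java
import java.util.Objects;

public class ChecksumDemo {
    static class Product {
        private String name;
        private int price;
        private int checksum; // hash of all fields except checksum itself

        Product(String name, int price) {
            this.name = name;
            this.price = price;
            markClean();
        }
        private int computeChecksum() { return Objects.hash(name, price); }
        // Called by the mapper after a successful load or save.
        void markClean() { checksum = computeChecksum(); }
        // The mapper asks this before writing; unchanged objects are skipped.
        boolean isDirty() { return checksum != computeChecksum(); }
        void setPrice(int price) { this.price = price; }
    }

    public static void main(String[] args) {
        Product p = new Product("widget", 10);
        System.out.println(p.isDirty()); // false
        p.setPrice(12);
        System.out.println(p.isDirty()); // true
    }
}
```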
For PHP you will find some code examples in this answer.
Note: if your implementation is using DAOs to isolate domain objects from data mappers, then the logic of checksum-based verification would be moved to the DAO.
This is the "industry standard" for your problem and there is a whole chapter (11th) dealing with it in PoEAA book.
The basic idea is this: you create an instance that acts like a controller (in the classical, not the MVC, sense of the word) between your domain objects and data mappers.
Each time you alter or remove a domain object, you inform the Unit of Work about it. Each time you load data in a domain object, you ask Unit of Work to perform that task.
There are two ways to tell Unit of Work about the changes:
When all the interaction with domain object has been completed, you call
commit() method on the Unit of Work. It then finds the necessary mappers and stores all the altered domain objects.
At this stage of complexity the only viable implementation is to use Unit of Work. It also would be responsible for initiating and committing the SQL transactions (if you are using SQL database), with the appropriate rollback clauses.
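A bare-bones Unit of Work along these lines might look like the following (a Java sketch; real implementations track new, dirty, and removed objects separately and wrap the writes in a transaction, which this sketch omits):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class UnitOfWorkDemo {
    interface Mapper { void update(Object domainObject); }

    // Registers altered objects; on commit(), hands each to its mapper once.
    static class UnitOfWork {
        private final Set<Object> dirty = new LinkedHashSet<>();
        private final Mapper mapper;
        UnitOfWork(Mapper mapper) { this.mapper = mapper; }
        void registerDirty(Object o) { dirty.add(o); }
        void commit() {
            for (Object o : dirty) mapper.update(o);
            dirty.clear();
        }
    }

    public static void main(String[] args) {
        List<Object> written = new ArrayList<>();
        UnitOfWork uow = new UnitOfWork(written::add);
        String customer = "customer#1";
        uow.registerDirty(customer);
        uow.registerDirty(customer); // registering twice still writes once
        uow.commit();
        System.out.println(written.size()); // 1
    }
}
```

The set-based registry is what keeps a doubly-touched object from being saved twice in one business transaction.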
Read the "Patterns of Enterprise Application Architecture" book. It's what you desperately need. It also would correct the misconceptions about MVC and MVC-inspired design patterns that you have acquired by using Rails-like frameworks.
The question is from the from the patterns of enterprise application architecture by Fowler.
My attempt at the effort-to-enhance formula: effort = d × r + c.
But I'm having a hard time justifying it for the Table Module pattern growing exponentially, as there is not much replication of definitions in that part.
Why does table module effort to enhance grow exponentially ?
References
Book
Domain Model
Table Module
Well, it's favourite diagram of Dino Esposito :o)
Mostly it's based on developers' experience and feelings. As for me, Domain Model is applicable to not many systems; for most of them, less complicated patterns should be used. Maybe it's your case. A well-designed table module application can have "linear" complexity for many, many years, and this is ok. But if you feel that you spend much time doing the same job for different parts/classes/modules of your application, if you feel you can't control it, if you have a distributed team and 10-20 developers, you can think about separation of concerns, bounded contexts and domain model. So, this diagram is mostly a marketing step to "sell" you DDD. I like DDD, but it really takes a lot of time at the beginning, and there is a chance that you never reach the point where DDD is easier than any non-DDD way.
So, answering your question - no reason, just to tell that sometimes DDD is better.
We're using the DTO pattern to marshal our domain objects from the service layer into our repository, and then down to the database via NHibernate.
I've run into an issue whereby I pull a DTO out of the repository (e.g. CustomerDTO) and then convert it into the domain object (Customer) in my service layer. I then try and save a new object back (e.g. SalesOrder) which contains the same Customer object. This is in turn converted to a SalesOrderDTO (and CustomerDTO) for pushing into the repository.
NHibernate does not like this - it complains that the CustomerDTO is a duplicate record. I'm assuming that this is because it pulled out the first CustomerDTO in the same session, and because the returned object has been converted back and forth, it cannot recognise it as the same object.
Am I stuck here or is there a way around this?
Thanks
James
As others have noted, implementing Equals and GetHashCode is a step in the right direction. Also look into NHibernate's support for the "attach" OR/M idiom.
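Identity-based equality, as recommended here, looks roughly like the following (sketched in Java as equals/hashCode, the direct analogue of Equals/GetHashCode; the Customer class is illustrative):

```java
import java.util.HashSet;
import java.util.Set;

public class EntityIdentityDemo {
    // Equality based on the persistent identifier, not the object reference,
    // so the same row loaded twice compares as one entity.
    static class Customer {
        private final long id;
        Customer(long id) { this.id = id; }
        @Override public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Customer)) return false;
            return id == ((Customer) o).id;
        }
        @Override public int hashCode() { return Long.hashCode(id); }
    }

    public static void main(String[] args) {
        Set<Customer> set = new HashSet<>();
        set.add(new Customer(42));
        set.add(new Customer(42)); // same identity, not added twice
        System.out.println(set.size()); // 1
    }
}
```

With reference equality, the two instances above would count as distinct objects, which is exactly the duplicate-record confusion described in the question.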
You also have the nosetter.camelcase option at your disposal:
Furthermore, I'd like to encourage you not to be dissuaded by the lack of information out there online. It doesn't mean you're crazy, or doing things wrong. It just means you're working in an edge case. Unfortunately the biggest consumers of libraries like NHibernate are smallish in-house and/or web apps, where there exists the freedom to lean all your persistence needs against a single database. In reality, there are many exceptions to this rule.
For example, I'm currently working on a commercial desktop app where one of my domain objects has its data spread between a SQL CE database and image files on disk. Unfortunately NHibernate can only help me with the SQL CE persistence. I'm forced to use a sort of "Double Mapping" (see Martin Fowler's "Patterns of Enterprise Application Architecture") to map my domain model through a repository layer that knows what data goes to NHibernate and what goes to disk.
It happens. It's a real need. Sometimes an apparent lack in a tool indicates you're taking a bad approach. But sometimes the truth is that you just truly are in an edge case, and need to build out some of these patterns for yourself to get it done.
What is the simplest way to use database persistence in Java? I know, many frameworks exists around the Internet, but it could be fun to learn how to develop a persistence layer by myself and its design patterns. Where to start? Books, websites, how-tos, code-examples, etc.
If you are looking for a learning practice then try to get a copy of Craig Larman's Applying UML and Patterns.
There Larman presents a chapter on lightweight database persistence mapper design. Unlike Hibernate, which is based on an unobtrusive persistence model, he presents an obtrusive framework in which domain objects have to extend a PersistentObject. We also have to write mapper classes for each persistent domain class. It's some sort of ActiveRecord pattern without any code-generation concept.
I found this book to be particularly useful. This is a good one too.
Having created one myself I agree - it is a lot of fun, and a lot of work too. It all depends on your objectives.
Are there sample ASP.NET projects around using the patterns discussed in the book by Martin Fowler (Patterns of Enterprise Application Architecture)?
I have downloaded the Northwind starters kit and Dinner Now, which are very good. Are there others that use things like Unit of Work, Repository, ...
thx, Lieven Cardoen
dofactory.com has C# and VB.NET GoF pattern examples. They also have a full ASP.NET web application example detailing the use of the patterns, although I don't think that is a free download.
I am about to embark on a rewrite of a VB6 application in .NET 3.5sp1. The VB6 app is pretty well written and the data layer is completely based on stored procedures. I'd like to go with something automated like Linq2SQL/Entity Framework/NHibernate/SubSonic. Admittedly, I haven't used any of these tools in anything other than throwaway projects.
The potential problem I fear I might have with all these choices is speed. For instance, right now to retrieve a single row (or the entire list), I use the following sproc:
ALTER PROCEDURE [dbo].[lst_Customers]
    @intID INT = NULL,
    @chvName VARCHAR(100) = NULL
AS
SELECT Customer_id, Name
FROM dbo.Customer
WHERE (@intID IS NULL OR @intID = Customer_id)
  AND (@chvName IS NULL OR Name LIKE ('%' + @chvName + '%'))
ORDER BY Name
To retrieve a single row in Linq2SQL/Entity Framework/NHibernate/SubSonic, would these solutions have to bring the entire list down to the client and find the row that I need?
So, what's the consensus for the data access strategy for an application with a large data domain?
I'm going to play devil's advocate and recommend you at least consider sticking with the stored procedures. These represent a chunk of code that you do not have to re-write and debug. This article from our Very Own [tm] Joel Spolsky gives a coherent argument for avoiding complete re-writes.
Given a 'greenfield' project you can use what you want, and an O/R mapper might well be a good choice. However, you've already stated that the VB6 app is well written. If the sprocs are well written, then you get some of your app for free and it comes already debugged, plus you get to recycle the database schema and avoid most of the pain from data migration.
Fowler's Patterns of Enterprise Application Architecture should give you some good pointers for designing a data access layer that will play nicely with the stored procedures without causing maintenance issues.
This is done quite commonly on Oracle/Java apps. Many legacy Oracle apps have large bodies of stored procedure code in PL/SQL - this was a standard architecture back in the client-server days of Oracle Forms. It is common practice to write a wrapper for the sprocs in Java and build the user interface on top of the wrapper.
One of the other posters mentioned that Subsonic can generate wrappers for the sprocs.
Once upon a time I had occasion to do a data dictionary hack that generated a proof-of-concept Java/JDBC wrapper for PL/SQL sprocs - IIRC it only took a day or so. Given that it's not that hard to do, I'd be surprised to find that there isn't quite a bit of choice in things you can get off the shelf to do this. In a pinch, writing your own isn't all that hard either.
I am a programmer, and I think I'm well educated in OO. I believe in a POCO (C#) and a model that only has get/set methods to encapsulate data. 3 layer domain models.
I'm looking for documentation that support value for having a simple domain model and all business logic in the service layer and a DAL for data access.
Martin Fowler:
is saying that a (anaemic) domain model has no value, and for it to have value it must handle the buslogic or/and data CRUD operation. I need some good books that has some counterarguments for Martin Fowler. (this is not a case of dismissing Martin Fowler, I respect the work. I'm looking for a better understanding of what we are doing and why? )
You can find counterarguments from... Fowler himself.
PoEAA, p. 110, Transaction script :
However much of an object bigot you become, don't rule out Transaction Script. There are a lot of simple problems out there, and a simple solution will get you up and running much faster.
A Transaction Script is not exactly the kind of service you describe (it might not use domain objects, even anemic ones), but it's pretty close.
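For illustration, a Transaction Script puts all the logic for one business transaction in a single procedure, roughly like this (a Java sketch; the account map stands in for real tables and the names are assumptions):

```java
import java.util.HashMap;
import java.util.Map;

public class TransactionScriptDemo {
    // The "database": a map standing in for an accounts table.
    static Map<String, Integer> accounts = new HashMap<>();

    // One procedure per business transaction; no domain objects needed.
    static void transfer(String from, String to, int amount) {
        int fromBalance = accounts.getOrDefault(from, 0);
        if (fromBalance < amount)
            throw new IllegalStateException("insufficient funds");
        accounts.put(from, fromBalance - amount);
        accounts.put(to, accounts.getOrDefault(to, 0) + amount);
    }

    public static void main(String[] args) {
        accounts.put("alice", 100);
        accounts.put("bob", 0);
        transfer("alice", "bob", 30);
        System.out.println(accounts.get("bob")); // 30
    }
}
```

For simple problems this really is the whole pattern, which is why Fowler warns against ruling it out.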
Also, note that the notion of POCO doesn't assume anything about the dumbness or anemic-ness of an object. You can have rich domain POCOs with behavior in them. POCO/POJO describes a simple native object, as opposed to an object decorated with annotations or attributes, or that inherits a special class from a framework, often for persistence purposes. have a requirement where I need to insert into database from the object which has information about table name and values for each column. Could you please give suggestions on the best way to implement this with Hibernate.
Java Classes
public class InsertDataDetails {
    public String tableName;
    public List<ColumnDetails> columnDetails;
}

public class ColumnDetails {
    public String columnName;
    public String columnValue;
}
At present I am thinking to create a native query and execute it with Session.execute query. However I am looking for better design where we can use DTO classes and hibernate features.
All of these proposals are far too relational-centric and simplistic. I'd recommend that you abandon this approach.
You should be thinking about objects in terms of the problem you're trying to solve, not persistence. If you want to use Hibernate, it can map your OO representation of the problem onto a relational schema.
I'd also recommend that you read Martin Fowler's Patterns of Enterprise Application Architecture to see how persistence problems are commonly solved in terms of objects.
Google for Data Access Object. Here's a simple generic one that's easy to implement with Hibernate and extend:
package persistence;

import java.util.List;

public interface GenericDao<K, V> {
    V find(K id);
    List<V> find();
    K insert(V value);
    void update(V value);
    void delete(V value);
}
Persistence in terms of objects has been solved many times over, as best as it can be done. What you've proposed does not represent a meaningful improvement over the state of the art.
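One possible in-memory take on the GenericDao contract, useful for tests or for seeing the shape of the pattern before wiring in Hibernate (a sketch; the update/delete signatures are adapted to take an explicit key so the map-backed version stays self-contained):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GenericDaoDemo {
    interface GenericDao<K, V> {
        V find(K id);
        List<V> find();
        K insert(V value);
        void update(K id, V value);
        void delete(K id);
    }

    // Map-backed stand-in for a Hibernate-backed implementation;
    // keys are generated the way a database identity column would.
    static class InMemoryDao<V> implements GenericDao<Integer, V> {
        private final Map<Integer, V> rows = new HashMap<>();
        private int nextId = 1;
        public V find(Integer id) { return rows.get(id); }
        public List<V> find() { return new ArrayList<>(rows.values()); }
        public Integer insert(V value) { rows.put(nextId, value); return nextId++; }
        public void update(Integer id, V value) { rows.put(id, value); }
        public void delete(Integer id) { rows.remove(id); }
    }

    public static void main(String[] args) {
        GenericDao<Integer, String> dao = new InMemoryDao<>();
        Integer id = dao.insert("widget");
        System.out.println(dao.find(id)); // widget
    }
}
```

A Hibernate implementation would keep the same interface and delegate each method to the Session.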
I have a third party C# library for ldap operations. It does all operations on connection object as below:
LdapConnection connection = new LdapConnection(settings);
connection.Search(searchOU, filter, ...);
which I feel is not readable. I want to write a wrapper around it so that I should be able to write code like below:
As I would like to have different Ldap classes like
public class AD : LdapServer { }
public class OpenLdap : LdapServer { }
and then
AD myldap = new AD(settings);
myldap.Users.Search(searchOU, filter, ...);
myldap.Users.Add(searchOU, filter, ...);
myldap.Users.Delete(searchOU, filter, ...);
I am thinking about the Proxy design pattern, but things are not getting into my head about how to go about it. What classes should I have, etc.?
Any help?
The solution posted above inherits from the LdapConnection. This is good if you want to maintain the inheritance chain, but I don't think that is necessary in your case. You simply want to customize and simplify the interface.
The proxy design pattern inherits from the underlying object so that the proxy object can be used anywhere that the underlying object is required. This is good if you want to "inject" extra functionality into the class without the clients of that class realising. I don't think this is your intention here?
The big problem with the solution posted above is that (because it inherits directly from LdapConnection) you can call search in two ways like so:
Settings settings = new Settings();
AD myAD = new AD(settings);
object results = myAD.Users.Search();
// OR
object results2 = myAD.Search();
As I'm sure you can see from the code, both of these call the exact same underlying method. But in my opinion, this is even more confusing to developers than just using the vanilla LdapConnection object. I would always be thinking "what's the difference between these seemingly identical methods??" Even worse, if you add some custom code inside the UsersWrapper Search method, you cannot always guarantee that it will be called. The possibility will always exist for a developer to call Search directly without going through the UsersWrapper.
Fowler in his book PoEAA defines a pattern called Gateway. This is a way to simplify and customize the interface to an external system or library.
public class AD
{
    private LdapConnection ldapConn;
    private UsersWrapper users;

    public AD()
    {
        this.ldapConn = new LdapConnection(new Settings(/* configure settings here */));
        this.users = new UsersWrapper(this.ldapConn);
    }

    public UsersWrapper Users
    {
        get { return this.users; }
    }

    public class UsersWrapper
    {
        private LdapConnection ldapConn;

        public UsersWrapper(LdapConnection ldapConn)
        {
            this.ldapConn = ldapConn;
        }

        public object Search()
        {
            return this.ldapConn.Search();
        }

        public void Add(object something)
        {
            this.ldapConn.Add(something);
        }

        public void Delete(object something)
        {
            this.ldapConn.Delete(something);
        }
    }
}
This can then be used like so:
AD myAD = new AD();
object results = myAD.Users.Search();
Here you can see that the LdapConnection object is completly encapsulated inside the class and there is only one way to call each method. Even better, the setting up of the LdapConnection is also completely encapsulated. The code using this class doesn't have to worry about how to set it up. The settings are only defined in one place (in this class, instead of spread throughout your application).
The only disadvantage is that you lose the inheritance chain back to LdapConnection, but I don't think this is necessary in your case.
I have two classes
Parent and
Child.
class Child extends Parent {
    private String extraField1;
    private String extraField2;
    ...
}
Child class has 2 extra fields
extraField1 and
extraField2.
Q1. Should I make two diff. tables in the databse: one for
Child and other for
Parent?
or
Q1. Should I add two columns in the
Parent table (each column for one extra field) and store the
Child in the
Parent table.
=============================== EDITED =======================================
Yes,
Child and
Parent are classes in the same hierarchy.
Patterns of Enterprise Application Architecture covers this as well in its chapters on Single Table Inheritance, Class Table Inheritance, and Concrete Table Inheritance.
The coverage is similar to what Pascal has said. There's no One True Way, but the book does give you a good breakdown of costs and benefits, e.g.
The strengths of Concrete Table Inheritance are:
- Each table is self-contained and has no irrelevant fields. As a result it makes good sense when used by other applications that aren't using the objects.
- There are no joins to do when reading the data from the concrete mappers.
- Each table is accessed only when that class is accessed, which can spread the access load.
The weaknesses of Concrete Table Inheritance are:
- Primary keys can be difficult to handle.
- You can't enforce database relationships to abstract classes.
- If the fields on the domain classes are pushed up or down the hierarchy, you have to alter the table definitions. You don't have to do as much alteration as with Class Table Inheritance (285), but you can't ignore this as you can with Single Table Inheritance (278).
- If a superclass field changes, you need to change each table that has this field because the superclass fields are duplicated across the tables.
- A find on the superclass forces you to check all the tables, which leads to multiple database accesses (or a weird join).
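For contrast with the Concrete Table Inheritance trade-offs listed above, Single Table Inheritance can be sketched like this (a Java sketch; the map stands in for the one shared table, and the "type" discriminator column name is an assumption):

```java
import java.util.HashMap;
import java.util.Map;

public class SingleTableDemo {
    static class Parent { String name; }
    static class Child extends Parent { String extraField1; }

    // One "table" holds both classes; a discriminator column records the type,
    // and Child-only columns stay null for Parent rows.
    static Map<Integer, Map<String, String>> table = new HashMap<>();

    static void save(int id, Parent p) {
        Map<String, String> row = new HashMap<>();
        row.put("type", p instanceof Child ? "CHILD" : "PARENT");
        row.put("name", p.name);
        row.put("extraField1", p instanceof Child ? ((Child) p).extraField1 : null);
        table.put(id, row);
    }

    public static void main(String[] args) {
        Child c = new Child();
        c.name = "c1";
        c.extraField1 = "x";
        save(1, c);
        System.out.println(table.get(1).get("type")); // CHILD
    }
}
```

A find on the superclass needs only this one table, at the cost of nullable, sometimes-irrelevant columns - the mirror image of the Concrete Table weaknesses quoted above.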
I have used RIA service before, now testing Breeze Sharp.
RIA as well as Breeze give an impression that what you see on the server/middle tier is what you see on the client. To support that, the term Entity is being used on both the client and the server. Is it really an Entity, or is it really a Presentation Model or Model on the client?
For smaller systems having one or two level entity graphs, there may be nothing wrong in thinking the client and the server are the same. For larger systems with graphs going five or six levels deep, the entities need to be converted to DTOs to keep things simple. Unless the UI has some CRUD screens for entities, large applications end up with more DTOs and fewer entities. Most of the time, these DTOs represent what the UI wants and are equivalent to a presentation model.
Why can't we consider what we deal with at the client as presentation models rather than entities?
You are free to call the client-side entity class whatever you like :-)
More seriously, let's get at the typical reasoning behind the claim that this is an anti-pattern.
I want to be super clear about this. Breeze is designed for rich web client applications. A Breeze client is not a presentation layer; it has a presentation layer. It also has its own business model and data access layers.
The terms "entity" and "DTO" mean different things to different people. I like Evan's DDD definition for "entity" and Fowler's definition for "DTO" in PoEAA.
Breeze client entities qualify as Evans entities: "Objects that have a distinct identity that runs through time and different representations. You also hear these called 'reference objects'" [Fowler]. Breeze entities aren't just property bags; they have business logic as well and you can extend them with more of your own.
Breeze entities are not "presentation models". They are independent of any particular UI representation and typically do not implement presentation concerns.
They are designed such that they can be bound directly to visual controls. That's a Breeze productivity design decision ... a decision about how we implement entities. Some people - the people who think entity properties are an anti-pattern - will hate that. Evans is silent on that question. Fowler poo-poos it. If it offends you, you may not like Breeze. Move along.
I'm about to argue that this is a false dichotomy.
People often say "it's an anti-pattern to send entities over the wire. Always send DTOs". There is sound reasoning behind this poorly worded edict. When a client and server entity class are identical, you've coupled the server's implementation to the client's implementation. If the model changes on the server, it must change on the client and vice-versa even if the change is only relevant on one of the tiers. That can interfere with your ability to evolve the server and client code independently. We may accept that coupling as a matter of expedience (and expedience matters!), but no one wants it.
A Breeze client entity class does not have to be the same, neither in shape nor in business logic, as the server entity class. When you query in Breeze, you put entity data on the wire and transform it into client entities; when you save, you put client entity data on the wire and transform it on the server into server entities. DTOs may be involved in either direction. The important fact is that the classes can be different.
They are conceptually related of course. You'll have a devil of a time transforming the data between the two representations if the meaning of the
Customer entity diverges widely on the two sides. That's true with or without explicit DTOs.
Let's acknowledge as well that it is easier to transform the data in both directions when the classes are actually the same. You pay a mapping tax when they are different and you may lose the ability to compose Breeze LINQ queries on the client. You can pay the tax if you wish. Breeze doesn't care.
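The "mapping tax" mentioned above is just the hand-written transformation between the two shapes, e.g. (a Java sketch with illustrative classes; a Breeze/C# app would do the analogous thing):

```java
public class MappingDemo {
    // Server-side entity and a narrower client DTO; the toDto method below
    // is the "tax" paid whenever the two shapes differ.
    static class CustomerEntity {
        long id;
        String name;
        String internalNotes; // server-only, never sent to the client
        CustomerEntity(long id, String name, String notes) {
            this.id = id; this.name = name; this.internalNotes = notes;
        }
    }
    static class CustomerDto {
        long id;
        String name;
        CustomerDto(long id, String name) { this.id = id; this.name = name; }
    }

    static CustomerDto toDto(CustomerEntity e) {
        return new CustomerDto(e.id, e.name); // internalNotes deliberately dropped
    }

    public static void main(String[] args) {
        CustomerDto dto = toDto(new CustomerEntity(7, "Acme", "credit risk"));
        System.out.println(dto.name); // Acme
    }
}
```

When the classes are identical, toDto disappears and so does the tax, which is the trade-off the answer is weighing.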
My inclination is to start with the same classes on both sides and change them when and as necessary. That has worked well for a high percentage of classes in RIA Services and DevForce. Most importantly, it has never been difficult for me to re-factor to separate classes when the need arose.
<rant> The worrywarts exaggerate the risks of sharing class definitions and understate the cost of mapping layers whose benefits are rarely realized in practice during the lifetime of the application.</rant>
You wrote:
For larger systems with graphs going deep into five or six levels, the entities need to be converted to a DTO to make it simple. ... Most of the time, these DTOs will represent what UI wants and is equivalent to a presentation model
In my experience that is only true if you assume that your client simply pastes entities to the screen. But I have already stipulated that the client is an application, not the presentation layer.
I further argue that you need a domain model on the client for the same reason that you need one on the server: to reason about the domain. You do this independently of the presentation. I assume that your entities will appear in some fashion on multiple screens subject to differing presentation rules. It's the same model, presented many ways. We call this "pivoting around the data".
No matter how many faces you put on the model, the underlying model data and the business rules that govern them should stay the same. That's what makes it a "Domain Model" and not a "Presentation Model."
FWIW, I always have a "Presentation Model" (AKA "ViewModel") in my apps to orchestrate the activities of a View. So I don't ask myself "PM or Model?". Rather I choose either to data bind visual controls directly to model entities that I expose through the VM's api or I bind them instead to an intermediate "Item Presentation Model" (AKA "Item ViewModel") that wraps some entities. Which way I go is an application decision. In practice, I start by binding directly to the entities and refactor to an "Item ViewModel" when and as needed.
In either case, I will construct the PMs (VMs) that I need on the client. If I need an "Item ViewModel", I create that on the client too. I do not ask my server to prepare DTOs for my client to display. To me that is an anti-pattern because it couples the server to the client.
How? If the developer needs to change a screen on the client, she may have to wait for someone to provide the supporting server endpoint and DTO. Now we have to coordinate the release schedules of the server and client even though the impetus for the change was a client requirement, not a server requirement.
It's actually worse than that. Some server-side developer has to stop what she's doing and add a new service method to satisfy a client requirement. That wasn't one of her requirements ... but it is now. Over time the service API expands enormously and soon it is full of look-a-like members that do apparently the same job in slightly different ways.
Eventually we forget who is using which method and what for. No one dares change an existing method for fear of breaking an unknown client. So the dev copies something that kind of looks right, makes it a little different, and calls it something else. This pattern of service API pollution should sound familiar to anyone who has worked with enterprise applications.
Every seeming "rule" is meant to be broken. Of course there are occasions when it is both expedient and efficient to let the server prepare data for display. This happens most frequently with high volume, read-only data that summarize an even larger volume of complex data on the Data Tier. When I go this route, I'm typically motivated by performance considerations. Otherwise, I stay true to the entity-oriented architecture.
When it looks like everything in my app conforms to the exception, I conclude that I've got the wrong architecture for this particular application ... and this shouldn't be a Breeze app. I don't know if this is your case or not.
Hope this helps.
I'm just getting into how to design a good architecture for a software system, and I'm learning how to separate high-level components into Layers. In this case, I'm trying to use Tiers, so as to model each Layer as a black box.
There are 4 tiers in my architecture: Presentation, Application Services, Business Logic, and Domain/Persistence. For the purposes of my question, we really only need to focus on the Presentation and Application Services.
The Application Services Layer will contain a Service that allows tracking of a certain event. The Presentation will have several views that should update dynamically as the tracking model of the events changes. Inherently, it seems that I need a one-way change-propagation mechanism.
Since I'm trying to model these Layers as Tiers, I'd like to restrict communication between Facade objects for each Tier, and when necessary, allow a Tier to aggregate an object from one Tier lower, though only known by interface.
I'm programming this application in Java, so the obvious thing to use is Observable/Observer. However, I don't like that the update method for the Observer interface forces you to cast object arguments. I want to work around this by defining my own interface and class for this mechanism. The problem, then, is that the Application Logic will depend on an interface from the Presentation Tier, a certain no-no for this architecture. Is this a sign that I should try modeling with MVC foremost and Layer the Model? Or would it be a better idea to model each of the views with an interface known in the Application Services Layer? It seems like a bad place to put it, and I'm stuck. Also, I'm using the View-Handler design pattern to handle the multiple views.
It seems to me that your question is less about Publish/Subscribe than it is how to get the layers to communicate.
Short answer:
Use MVC/MVP. Look up blog posts about them, download source code, and remember: if all you have is a hammer, everything looks like a nail. Meaning don't apply patterns because you have them, apply them because you need them.
Long answer:
If you're working in Java, I suggest Head First Design Patterns which will get you oriented to the way of thinking in patterns. After you have your head around design patterns, which I think you're on your way to now, you can look at Patterns of Enterprise Application Architecture. Feel free to skip Head First, but it is a very good book that I highly recommend if you're getting into architecture.
Once you've digested the Fowler book, or at least have a basic understanding of N-Tiered Enterprise Architecture, you should be well on your way.
I've been getting to grips with MVC (in PHP) by way of Zend. My understanding of the Zend Framework is that each type of user request maps to a specific controller (which in turn may or may not map to a model), and each action maps to a view. I've noticed the same pattern in CodeIgniter and Kohana, and to some extent also in Symfony. Effectively, the URL maps thus: /controller/action/parameters...
Is this always the case with MVC? In what way is this different from Page Controller as a design pattern?
Zend Framework uses Two Step View. It's very similar to MVC. As you can see, there's not much correspondence between the architecture and the URL mapping.
If you want to learn about likely architectures, read PoEAA by Martin Fowler.
I want to learn Entity Framework. I started with some EF tutorials and I also know a little about LINQ to SQL.
I want to learn through a pet project. The project should be in three layers: web forms (prez), data layer (C# lib), business layer (C# lib). The project can be anything, any functionality. I just want to learn how to use EF in different layers and in the UI.
Can anyone guide me on how to start with the layering? Help me learn how I should use EF objects from the DAL layer to the BL and then the UI.
I am confused, as all the tutorials show direct binding of EF to an EF data source in the UI and with controls.
thanks a lot.
A couple of things I would recommend:
Rob Conery (with occasional guests) put together a video series on building a storefront site using ASP.NET MVC. He used LINQ-to-SQL, not Entity Framework, but I don't think the difference is significant to what you are interested in learning. One nice thing about this series is that he walks you through the various design decisions he makes, and even backtracks when he later feels that one of them was wrong. Another is that, in addition to MVC and LINQ-to-SQL, he also explores some other development concepts such as inversion of control and test-driven development.
Martin Fowler's book Patterns of Enterprise Application Architecture is a great resource for this sort of thing. He lays out the different patterns that are available in each tier of your application and discusses how to use them and when each is appropriate. Fowler's writing style is friendly and easy to read, and a lot of the patterns in his book are prominent in the vernacular of the software development world today (e.g. Repository pattern, Active Record, Unit of Work).
Hope this helps!
I would like to know a good introductory book on the Repository pattern using Entity Framework. Do you know any? Thanks
Read the book about Entity Framework, relevant parts from the book about enterprise application patterns and the book about domain driven design. You must first understand every single part to use it correctly.
Once you understand topics answer few questions:
Until you know at least the expected answers to these questions, you don't need to bother with the repository pattern. The main rule of pattern usage: a pattern should be used when it is needed, not because it exists. The boom of all these repository articles goes against this rule, and moreover most of these articles are wrong, using either wrong code (like passing Func<> to queries instead of Expression<Func<>>) or bad assumptions (like saying that using a repository will make your code unit testable).
Let's say I have a POJO class Meal. This class is mapped with an ORM (e.g. JPA + Hibernate), so I can persist it in a DB. Among other things, this class contains a List<Dish> dishes (Dish being another mapped POJO) that is lazily loaded by the ORM.
Now I have a business layer method Meal getNextDueMeal(). This is invoked by the UI layer to then display the meal to the user. Of course, the dishes that make up the meal should also be displayed.
But how should I deal with this? If I try to iterate over the list returned by getMeals() naively, I would get a LazyInitializationException. I could maintain an EntityManager in the UI layer, e.g. by using Spring's @Transactional annotation. But then the object returned from the business logic would stay persistent, i.e. if I somehow modify the Meal-"POJO" in the UI, it will automatically get saved once I return from the @Transactional method, which may not be what I want.
tl;dr: Should the business layer return persistent objects to the UI layer? And if not, how do I deal with lazy loading?
When you're working with a remote interface, it's not a good idea to return your entities as your Business Layer return values. You can define some DTOs (Data Transfer Objects), populate them from the fetched entities, and return those DTOs as the return value.
Should the business layer return persistent objects to the UI-Layer? And if not, how do I deal with lazy loading?
About lazy loading: you can populate all the required values into the DTO in the business layer; hence, in your UI layer all the required attributes are already loaded and you won't encounter those LazyInitializationException exceptions.
The fields in a Data Transfer Object are fairly simple, being primitives, simple classes like Strings and Dates, or other Data Transfer Objects. Any structure between data transfer objects should be a simple graph structure (normally a hierarchy), as opposed to the more complicated graph structures that you see in an Entity.
In your case, you probably would have a MealDto and a DishDto, something like this:
public class MealDto {
    private String name;
    private List<DishDto> dishes;
    // getters and setters
}
You can use another abstraction responsible for assembling the DTO from the corresponding entity. For example, you can use a MealAssembler:
public class MealAssembler {
    public MealDto toDto(Meal meal) {
        MealDto dto = new MealDto();
        dto.setName(meal.getName());
        // populate the other stuff
        return dto;
    }
}
New to PHP & OOP so bear with me... I'm in the optimistic early stages of designing and writing my first PHP & OOP website after a lifetime of writing crappy M$ VBA rubbish.
I have a class "User" which has a save method with the relevant database calls etc... (actually I have DB and DBUtils classes that handles the connection and CRUD - my business classes just call select, update, delete methods on DB Utils and pass associative arrays of data)
Anywho... My "User" class is extended by an "Admin" class which has some additional properties over and above "User" yada yada...
What is the best way to deal with the save method on "Admin"? I understand that if I add a save method to the Admin class it will supersede the one on User, but I don't want it to. I want to write the save method on Admin to only deal with the properties etc. that are specific to the Admin objects, and for the properties inherited from "User" to be dealt with in the User save method.
Does that make sense? Is it a specific OOP pattern I'm looking for? Any help or guidance with how I should design and structure this code would be appreciated.
EDIT: Whoa! Thanks for all the answers below. Not sure which is my preferred yet. I will have to do some playing around...
Your main issue stems from the fact that you have been ignoring one of the core ideas in OOP: the single responsibility principle .. then again, it seems like everyone who provided "answers" has no idea what SRP is either.
What you refer to as "business logic" should be kept separate from storage-related operations. Neither the instance of User nor Admin should be aware of how the storage is performed, because it is not business logic.
$entity = new Admin;
$mapper = new AdminMapper( $db );

$entity->setName('Wintermute');
$mapper->fetch( $entity );   // retrieve data for admin with that name

$entity->setName('Neuromancer');
$mapper->store( $entity );   // rename and save
What you see above is an extremely simplified application of data mapper pattern. The goal of this pattern is to separate business and storage logic.
If, instead of instantiating the mapper directly with $mapper = new AdminMapper( $db ), it were provided by an injected factory, $mapper = $this->mapperFactory->build('Admin') (as it should be in a proper codebase), you would have no indication of the storage medium. The data could be stored in an SQL DB, a file, or some remote REST API. If the interface of the mapper stays the same, you can replace it whenever you need.
The use of a factory lets you avoid tight coupling to specific class names and, in the case of mappers for SQL databases, lets you inject the same DB connection into every instance.
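To make the mapper-plus-factory idea concrete, here is a minimal Java sketch. All class names are invented for illustration, and a HashMap stands in for the real storage medium; a real mapper would wrap a DB connection or similar.

```java
import java.util.HashMap;
import java.util.Map;

// Domain object: knows nothing about storage.
class User {
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

// Mapper: the only place that knows how users are stored.
// Here an in-memory map stands in for a database table.
class UserMapper {
    private final Map<String, User> storage = new HashMap<>();

    public void store(User user) { storage.put(user.getName(), user); }
    public User fetch(String name) { return storage.get(name); }
}

// Factory: callers never name a concrete mapper class directly,
// so the wiring (DB handles, config) lives in one place.
class MapperFactory {
    public Object build(String type) {
        if ("User".equals(type)) {
            return new UserMapper();
        }
        throw new IllegalArgumentException("No mapper for " + type);
    }
}

public class MapperDemo {
    public static void main(String[] args) {
        UserMapper mapper = (UserMapper) new MapperFactory().build("User");
        User user = new User();
        user.setName("Wintermute");
        mapper.store(user);
        System.out.println(mapper.fetch("Wintermute").getName()); // prints Wintermute
    }
}
```

Because the business object never touches the mapper's internals, swapping the map for SQL or a REST client changes only the mapper.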
To learn a bit more about it, you could read this, this and this.
But, if you are seriously thinking about studying OOP, then reading this book is mandatory.
So I've been looking up more tutorials and articles about the MVC design pattern to deepen my understanding of them and I'm starting to doubt if I have been doing it all wrong. So yeah the goal of those patterns are to make code reusable and to minimize repeated code, correct?
I've been seeing various ways of how those design patterns are explained which are confusing me a bit. In the past I thought it was the controller-as-a-mediator-between-model-and-view way, but now I'm learning that that was wrong and that the view is actually much more than just a template. Then I also read somewhere (I think here) that in a true MVC pattern, there is by definition only one model and all the other "models" are just different facets of the single model. Is this the case? Is this the best way to separate code and make it reusable? How would that look like in code? And again somewhere else I read that for web-applications it is better to stick to the MVVM pattern.
So now I'm a bit confused. What IS the most effective pattern to separate concerns in a web-application and make the code reusable? I would prefer to not only see a description of this pattern, but also a short example in code so I understand it better.
So yeah the goal of those patterns are to make code reusable and to minimize repeated code, correct?
Is this the best way to separate code and make it reusable?
Nope. It really isn't.
The goal in MVC and MVC-inspired patterns is to isolate business logic (model layer) from the user interface. And within the UI - to divide it into managing input and output.
Basically, MVC and MVC-inspired patterns are architectural design patterns that implement the principle known as Separation of Concerns.
Then I also read somewhere (I think here) that in a true MVC pattern, there is by definition only one model and all the other "models" are just different facets of the single model. Is this the case?
No. Model is not a "thing" (class, object). It is a layer. In the same way, you could say that all controllers and views are contained in the presentation layer or UI layer.
What people refer to as "models" are usually either domain objects or (in a much worse case) some form of active record implementation.
And again somewhere else I read that for web-applications it is better to stick to the MVVM pattern.
The MVVM pattern adds another layer between views and the model layer. It's best used in situations when you cannot control the implementation of the views and/or the model layer.
Most people confuse it with the use of the presentation object concept (M. Fowler has a nasty habit of adding "model" to everything, which creates boundless confusion). Presentation objects are meant to isolate reusable parts of UI logic.
How would that look like in code?
MVC and MVC-inspired patterns were created to manage large codebases. You apply the MVC pattern when simple use of object-oriented practices is not enough to make code understandable. You do it by adding additional constraints to your codebase. MVC does not add anything new to your application. Instead, it restricts where and what code can be written.
And "sort code example" will not actually illustrate the pattern. To actually understand, how it works, you should read something like PoEAA book .. or something similar.
What IS the most effective pattern to separate concerns in a web-application and make the code reusable?
It's a matter of opinion.
The official website seems to only provide some tutorials on how to use it:
But I want to know why the directories should be set up like this
Symfony is based on a lot of patterns, this blog post highlights a few of them:
Basically, it's a model-view-controller (MVC) framework, and the directories are set up like that only to organize the numerous configuration and PHP files that are created during a normal, structured project. Of course, you'd need to be a little bit more specific about what makes you curious about the directory structure, but if you go through the documentation, you'll find interesting facts about how your project will be divided. The directory structure is not necessarily related to a specific design pattern itself (the code is) but probably more related to just getting your files organized.
Anyway, Symfony is a PHP framework like many others, and just by going through generic design patterns using classic books such as Patterns of Enterprise Application Architecture or websites about PHP patterns recipes, popular patterns or extensive lists, you should get a good idea on the general structure.
We are starting a new Java EE project and am looking for suggestions regarding design patterns to expose the backend interfaces.
It would be preferable if the backend logic can run seamlessly on Tomcat, other Java EE 5 containers. We are currently veering towards Java EE 5, EJB3, JPA and the preferred deployment platform is JBoss 5 AS. We would expect the backend engine to expose the following interfaces (e.g. EJB3 local / remote, SOAP, REST) for the business logic. Any pointers regarding suitable design patterns which can be used as for as code layout and package structure is concerned, which will enable us to package and deploy them on the above mentioned containers.
These are some books that cover the topic:
What are good principles for creating a scalable website, predominantly in C#? What design patterns are more common for a C#-based website? Links to good books or articles are welcome.
Martin Fowler's Patterns of Enterprise Application Architecture (summaries on his website) is a good place to start. There's an awful, awful lot of components and technologies that can go into building a scalable website... load balancing, caching, application serving, database setup, networking, and somewhere in there is the actual code being written.
The MVC pattern itself does not describe how should you implement your web application. What it describes is how your components should interact with each other in order to achieve a modular architecture with replaceable components.
The pattern is explained in details in Martin Fowler's POEAA and in Wikipedia. More info about MVC can be found in Wikipedia
A simple example using Java, Spring and Hibernate
In this case Spring MVC provides a pluggable framework where you can define your models , controllers and views without coupling them together too tightly (this is achieved through IOC/DI).
The first thing to notice is the DispatcherServlet, which is a regular servlet that serves as an entry point by handling all the incoming HTTP requests and routing them to their respective controllers. The appropriate controller is looked up by its mapping, e.g. via @RequestMapping annotations.
The controller's responsibility is to determine what actions should be performed in response to the incoming request. This is usually done by checking headers, parameters, session info, and the path for information about what the user wanted to do. Here is an extremely simple example:
if (session.getAttribute("authenticated") == false) {
    // we need to redirect to the login page
} else {
    // everything was fine, so we do some business logic in the model
    importantService.doSomethingReallyImportant(productOrder);
}
Then the control is passed to the model layer where business logic happens. This may include any operation that changes the model's state, like updating a password, registering a booking, clearing transactions, and so on. In web applications these operations often involve the use of a persistence API, eg. Hibernate.
public class ImportantService {
    public void doSomethingReallyImportant(final ProductOrder order) {
        // Here we define the business operation
        getCurrentBasket().add(order);
        // An additional pseudo-persistence operation
        getSession().update(order);
    }
}
In practice, when the model has finished, control is returned to the controller, which decides how to update the view (e.g. redirecting the browser or simply displaying a result page) where the user sees the result of his/her action.
A lot of applications with a GUI can be seen as handling a collection of objects (probably at several levels). For instance, a list of contacts or a set of documents. Moreover, maybe there exists the concept of "current object" (current contact, current document or the like) in the application and some GUI controls make actions happen on this current object (and not on other objects in the collection). Obviously, the GUI should offer a way for selecting a different object as "the new current one" before applying new actions on it. I think it is a quite general situation, so maybe there is a quite general solution to where to place such a concept (for instance, an integer index on a list) in the MVC pattern.
I feel it should be out of the Model (I can think of an application with several View/Controller pairs sharing one Model and where each View has its own opinion on which object is the selected or current one), but I have not been able to confirm it by "googling".
I would like to know pointers to authors discussing this subject. Moreover, your opinion is welcome (if such kind of discussion is allowed in this forum). Thanks.
Disclaimer: my primary language is PHP, and only have experience with MVC-related patterns in the context of web (mostly with the Model2 variant of it, because of the obvious limitations of web itself), which has shaped my understanding of MVC structure.
I see the concept of a Current Object as an aspect of the Model layer's state. The current object should not be directly exposed to other parts of the MVC triad. Both the controller(s) and view(s) have access to it only via the higher/public part of the model layer (I tend to call that part "Services", but it's a bad name).
This lets you freely change, manipulate, and swap the objects which you have marked as "current". At the same time, other parts of MVC are not directly affected.
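A rough Java sketch of this idea, with all names invented for illustration: the "current" selection lives behind a service object in the model layer, and controllers or views reach it only through that service, never by holding an index themselves.

```java
import java.util.ArrayList;
import java.util.List;

class Contact {
    private final String name;
    Contact(String name) { this.name = name; }
    String getName() { return name; }
}

// Part of the model layer's public surface: owns the collection
// and the notion of a "current" contact. Views never hold the index.
class ContactService {
    private final List<Contact> contacts = new ArrayList<>();
    private int currentIndex = -1;

    void add(Contact c) { contacts.add(c); }

    // A controller applies user input ("select item 1") here.
    void select(int index) {
        if (index < 0 || index >= contacts.size()) {
            throw new IndexOutOfBoundsException("No such contact");
        }
        currentIndex = index;
    }

    // What views ask for; they don't care how "current" is tracked.
    Contact current() {
        return currentIndex >= 0 ? contacts.get(currentIndex) : null;
    }
}

public class CurrentSelectionDemo {
    public static void main(String[] args) {
        ContactService service = new ContactService();
        service.add(new Contact("Alice"));
        service.add(new Contact("Bob"));
        service.select(1);
        System.out.println(service.current().getName()); // prints Bob
    }
}
```

If the tracking changes from an index to, say, an ID or a per-view selection, only ContactService changes; the views and controllers keep the same calls.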
As for the materials on subject, I have not really seen any articles/books dealing exclusively with this subject. The best I can suggest is reading Patterns of Enterprise Application Architecture .. again.
At the moment, I've got a fat controller and a thinner model layer.
My controller looks something like this.
namespace controller;

class home {
    public $template = 'home';
    protected $database;

    public function __construct(\Zend\Db\Adapter\Adapter $database){
        $this->database = $database;
    }

    /**
     * Returns the home page
     */
    public function indexView(){
        $userService = new UserService($this->database);
        $view = new ViewModel($this->template);
        $view->assign('pageTitle', 'Home');
        $view->assign('lead', "Welcome ".$userService->getFirstName());
        $view->assign('h1', 'Home');
    }
}
My model would consist of data manipulation, data gathering etc.
The viewModel class this calls, the view, is basically a container class which includes the header, footer and the actual template used inside.
In terms of MVC, I now understand that the Model and View are aware of each other.
Am I going about this the right way?
Short answer: no, you are not doing it the right way, or even slightly correct-ish way.
The MVC design pattern is all about separation of concerns. You separate the model layer from presentation layer, to split the domain business logic from how it is demonstrated.
In presentation layer you split the components that are responsible for user interaction (controllers) from those that govern the UI creation (views).
The model layer too is subject to some separation, though that usually is not covered in "mvc for beginners by beginners" blog posts. I wrote a short description in an earlier post. But if you want to actually understand how to implement the model layer, you will have to read Fowler's PoEAA.
In classical MVC (which you cannot use for the web) and Model2 MVC patterns, the view requests all the data that it needs from the model layer. The controller only changes the state of the model layer and the current view by applying user input to them.
In the simplest implementations of other MVC-inspired design patterns (MVVM and MVP), the controller-like structures (ViewModel and Presenter, respectively) provide the view with data from the model layer. That also means that having a viewmodel inside a controller makes no sense whatsoever. You can read more about the MVP pattern in this publication.
P.S. Also, why are you injecting a DB connection into the controller when all you will do with it is pass it along? That code fragment violates the Law of Demeter. If you need to acquire structures from the model layer inside a controller, you should be injecting a factory instead.
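As a hypothetical Java sketch of that suggestion (every name here is invented): instead of handing the controller a raw connection that it only forwards, inject a factory and let the controller ask it for model-layer services.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for real model-layer logic; a real service would be
// constructed with whatever storage access it needs.
class UserService {
    String firstName() { return "Ada"; }
}

// One place decides how services are wired (DB handles, config, ...).
class ServiceFactory {
    private final Map<String, Object> cache = new HashMap<>();

    Object build(String name) {
        return cache.computeIfAbsent(name, n -> {
            if ("User".equals(n)) return new UserService();
            throw new IllegalArgumentException("Unknown service: " + n);
        });
    }
}

// The controller never sees a DB connection; it only talks to services.
class HomeController {
    private final ServiceFactory factory;

    HomeController(ServiceFactory factory) { this.factory = factory; }

    String indexView() {
        UserService users = (UserService) factory.build("User");
        return "Welcome " + users.firstName();
    }
}

public class FactoryDemo {
    public static void main(String[] args) {
        HomeController controller = new HomeController(new ServiceFactory());
        System.out.println(controller.indexView()); // prints Welcome Ada
    }
}
```

The controller now depends only on things it actually uses, which is exactly what the Law of Demeter asks for.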
I am about to create a project where I want to have a class that connects my application to a database.
I want to do this in the best object-oriented way, following the SOLID principles!
My question to you is:
Is it ever smart to divide your Provider into subclasses, for example a subclass that gets information from the database and a subclass that can insert data into the database? Or do you keep these functionalities in one huge class?
I'd recommend that you take a look at Martin Fowler's Patterns of Enterprise Application Architecture. He's got a nice chapter on patterns for persistence.
This problem has been solved many, many times: ORM solutions like JPA and Hibernate, iBatis mapping, Spring JDBC. I can't imagine how you'll improve on what's been done before. If you can't articulate what's different, I'd recommend using something that's been written, tested, and proven before investing in something new.
If you must, I'd recommend a generic DAO. Here's a very simple one:
package persistence;

import java.util.List;

public interface GenericDao<K, V> {
    V find(K key);
    List<V> find();
    K save(V value);
    void update(V value);
    void delete(V value);
}
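To show how such an interface gets used, here is a hypothetical in-memory implementation (the interface is repeated, minus the package declaration, so the sketch is self-contained; a real implementation would wrap JDBC, JPA, or similar):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

interface GenericDao<K, V> {
    V find(K key);
    List<V> find();
    K save(V value);
    void update(V value);
    void delete(V value);
}

// Illustrative only: keys come from a simple counter, rows live in a map.
class InMemoryDao<V> implements GenericDao<Long, V> {
    private final Map<Long, V> rows = new HashMap<>();
    private final AtomicLong sequence = new AtomicLong();

    public V find(Long key) { return rows.get(key); }

    public List<V> find() { return new ArrayList<>(rows.values()); }

    public Long save(V value) {
        Long key = sequence.incrementAndGet();
        rows.put(key, value);
        return key;
    }

    // A map keyed separately from the value has nothing to flush;
    // a JDBC-backed DAO would issue an UPDATE here.
    public void update(V value) { }

    public void delete(V value) { rows.values().remove(value); }
}

public class DaoDemo {
    public static void main(String[] args) {
        GenericDao<Long, String> dao = new InMemoryDao<>();
        Long key = dao.save("first row");
        System.out.println(dao.find(key));        // prints first row
        dao.delete("first row");
        System.out.println(dao.find().size());    // prints 0
    }
}
```

Calling code depends only on GenericDao, so swapping the in-memory version for a database-backed one is invisible to it.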
I am a database / SQL / ADO.NET newbie, and I'm trying to build a solid foundation before I try to learn about ORM frameworks like NHibernate or Entity Framework. I have been reading up about platform-agnostic RDBMS concepts, and now I'm trying to adapt that to writing clean and reusable C# code that actually interacts with a database. I've heard of Martin Fowler's Patterns of Enterprise Application Architecture which has been recommended here before, but the "Enterprise" in the title makes me wonder if there's not some intermediate learning step I need to make before I'm ready to read that book.
As practice, I'm making a simple ASP.NET web site that helps visitors look up certain information from a database that has a few thousand records. It's a painfully simple first project--there are no user accounts or logins of any kind, and the site would get about a dozen hits per day.
My ASP.NET code features a few methods like this to retrieve data from the database:
string dbConnectString = "...";

public List<string> GetItems()
{
    List<string> rList = null;
    using (SqlConnection myConn = new SqlConnection(dbConnectString))
    {
        // Note: the query/table name and the adapter setup were lost
        // from this snippet; a typical SqlDataAdapter fill is assumed.
        string queryStatement = "SELECT Name FROM ...";
        using (SqlDataAdapter adapter = new SqlDataAdapter(queryStatement, myConn))
        {
            using (DataTable resultTable = new DataTable())
            {
                adapter.Fill(resultTable);
                // This loop should probably be a LINQ expression
                rList = new List<string>();
                foreach (DataRow row in resultTable.Rows)
                {
                    rList.Add((string)row["Name"]);
                }
            }
            myConn.Close();
        }
    }
    return rList;
}
I think I've done the right thing as far as handling the objects that implement IDisposable. But is there a better way to write the method itself?
Here are some questions that I have:
From what I've read, it seems that connection pooling makes it OK to instantiate a new SqlConnection with each call to methods like GetItems(). Is there a "cleaner" way to write the code, so I still do proper resource management but don't have so much repetition in blocks like using (SqlConnection ...)? How about this approach:
public class DatabaseManager : IDisposable
{
    protected SqlConnection myConn;

    public DatabaseManager(string connectionString)
    {
        // Set up the constructor
        myConn = new SqlConnection(connectionString);
    }

    // IDisposable implementation stuff goes here. Any usage of
    // DatabaseManager would have to take place in a using() block.

    public List<string> GetItems()
    {
        List<string> rList = null;
        // Note: the query/table name and the adapter setup were lost
        // from this snippet; a typical SqlDataAdapter fill is assumed.
        string queryStatement = "SELECT Name FROM dbo....";
        using (SqlDataAdapter adapter = new SqlDataAdapter(queryStatement, myConn))
        {
            using (DataTable resultTable = new DataTable())
            {
                adapter.Fill(resultTable);
                rList = new List<string>();
                foreach (DataRow row in resultTable.Rows)
                {
                    rList.Add((string)row["Name"]);
                }
            }
            myConn.Close();
        }
        return rList;
    }
}
Aside from my shameless use of vertical spacing, is there a more elegant way to extract the List from the result of the database query? One way would be to replace the foreach() iteration with a LINQ call, but that would only save two or three lines of code.
Thanks guys!
Your code is neither clean nor reusable:
To fix all these issues you need a lot of time and a lot of clever ideas. You have issues with a single manager with a single select query. How about dozens of managers, selects, updates, inserts, and deletes, with joins to other managers, and all this with proper transaction scopes and error handling?
This is why people stick with ORMs when possible and do not reinvent wheels. Writing a good, reusable data access layer is not easy, and although you could possibly have a working solution for one or two simple cases, this would not scale well because of the reasons I've mentioned. Sooner or later you'll end up with a large pile of pure mess, and after two or three unsuccessful approaches you would end up reinventing ORMs.
I am creating a database for a web application and am looking for some suggestions to model a single entity that might have multiple types, with each type having differing attributes.
As an example assume that I want to create a relational model for a "Data Source" object. There will be some shared attributes of all data sources, such as a numerical identifier, a name, and a type. Each type will then have differing attributes based on the type. For the sake of argument let's say we have two types, "SFTP" and "S3".
For the S3 type we might have to store the bucket, AWSAccessKeyId, YourSecretAccessKeyID, etc. For SFTP we would have to store the address, username, password, potentially a key of some sort.
My first inclination would be to break out each type into their own table with any non-common fields being represented in that new table with a foreign key in the main "Data Source" table. What I don't like about that is that I would then have to know which table is associated with each type that is stored in the main table and rewrite the queries coming from the web app dynamically based on that type.
Is there a simple solution or best practices I'm missing here?
What you are describing is a situation where you want to implement table inheritance. There are three methods for doing this, all described in Martin Fowler's excellent book, Patterns of Enterprise Application Architecture.
What you describe as your first inclination is called Class Table Inheritance by Fowler. It is the method that I tend to use in my database designs, but doesn't always fit well. This method corresponds most closely to an OO view of the database, with a table representing an abstract class and other tables representing concrete implementations of the abstract class. Data must be queried and updated from multiple tables.
It sounds like what you actually want to use is called Single Table Inheritance by Fowler. In this method, you'd actually put columns for all of your data in one table, with a discriminator column to identify which fields are associated with the element type. Queries are generally simpler, although you do have to deal with the discriminator column.
Finally, the third type is called Concrete Table Inheritance by Fowler. In my mind, this is the least useful. In this method, you give up all concepts of having any kind of hierarchical data, and create a single table for each element type. Still, there are times when this might work for you.
All three methods have their pros and cons. You should consult the links above to see which might work best for you in your project.
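As an illustrative Java sketch of the Single Table Inheritance option (the class names and column names are invented; JPA's @Inheritance and @DiscriminatorColumn annotations automate this same dispatch): one query reads the shared table, and the discriminator column decides which subclass to build.

```java
import java.util.Map;

abstract class DataSource {
    final long id;
    final String name;
    DataSource(long id, String name) { this.id = id; this.name = name; }
}

class S3Source extends DataSource {
    final String bucket;
    final String accessKeyId;
    S3Source(long id, String name, String bucket, String accessKeyId) {
        super(id, name);
        this.bucket = bucket;
        this.accessKeyId = accessKeyId;
    }
}

class SftpSource extends DataSource {
    final String address;
    final String username;
    SftpSource(long id, String name, String address, String username) {
        super(id, name);
        this.address = address;
        this.username = username;
    }
}

// One table, one query: the "type" column is the discriminator that
// decides which subclass a row becomes. Columns the row doesn't use
// (e.g. "bucket" for an SFTP row) would simply be NULL in the table.
class DataSourceMapper {
    DataSource fromRow(Map<String, String> row) {
        long id = Long.parseLong(row.get("id"));
        String name = row.get("name");
        switch (row.get("type")) {
            case "S3":
                return new S3Source(id, name, row.get("bucket"),
                        row.get("aws_access_key_id"));
            case "SFTP":
                return new SftpSource(id, name, row.get("address"),
                        row.get("username"));
            default:
                throw new IllegalArgumentException(
                        "Unknown type: " + row.get("type"));
        }
    }
}

public class InheritanceDemo {
    public static void main(String[] args) {
        Map<String, String> row = Map.of(
                "id", "1", "name", "prod-assets", "type", "S3",
                "bucket", "assets", "aws_access_key_id", "AKIA...");
        DataSource ds = new DataSourceMapper().fromRow(row);
        System.out.println(ds instanceof S3Source); // prints true
    }
}
```

The web app then queries one table and dispatches on the discriminator, rather than rewriting queries per type as with Class Table Inheritance.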
I am looking to write some web services.
What defines one unit of "service"? I note that as part of one project you can have multiple .svc service files.
How do you normally segment your services? For example, a banking app:
Would you have one service (.svc) for?
Is there a method of grouping the entities? For example, instead of having a transfer service, you could put send and receive money in the account service, because you receive and send out of accounts.
This also brings another question in regards to methods and parameters. If I wanted to add an edit client, would one normally add a method like EditClient(Client client, Client newClient), and replace the whole client with another client object? Or would you add separate methods for editing a client for example: EditName(Client client, string name) under Client service?
I'd like to properly lay out where the operation contracts will fit in my web services.
I would suggest reading Patterns of Enterprise Application Architecture by Martin Fowler. It should give you plenty of organisation tactics for service-oriented software.
I am running into a problem to validate a model which refers to other models.
I have a 'user' model and a 'profile' model. I want to create the profile, but I need to check that the user actually exists.
In my profile model I have a method 'validateUser' but I would either have to write a query to a specific table within a database or I have to create a user model object and call exists(id).
Both options seem to have a lot of drawbacks. The table name could change, and I would have to go over all the models that use it. With the user model object I would have to create an object within the profile model or inject it.
What is the best way to approach this?
In a real-world situation validation is rarely a straightforward process. Instead you have several unrelated concerns:
If you are using active record for representing your domain model (don't confuse it with the M in MVC), then all three of these aspects become the responsibility of a single object.
Which is what you have now.
The best option is to separate all these responsibilities (all hail SRP). Basically what you do is divide your current setup into three different groups of structures:
domain objects: for dealing with specific rules of domain entity
data mappers: for storage abstraction
services: for interaction between domain objects and mappers (or other domain objects)
Since your question was somewhat confusing (there were users and profiles and saving and validation, and then something wasn't there), I am not sure if I understood it correctly, but here is a small example:
public function createProfile( $id, $email, $name )
{
    $account = new Account;
    $account->setId( $id );

    $accountMapper = new AccountMapper( $pdo ); // explained below
    if ( $accountMapper->fetch( $account ) === false )
    {
        $this->errors[] = .. something about missing account
        return;
    }

    $profile = new Profile;
    $profile->setEmail( $email )
            ->setName( $name );

    if ( $profile->isValid() === false )
    {
        $this->errors[] = .. something about invalid profile
        return;
    }

    try
    {
        $profileMapper = new ProfileMapper( $pdo ); // explained below
        $profileMapper->store( $profile );
    }
    catch ( Exception $e )
    {
        $this->errors[] = .. something about failing to create profile
        return;
    }

    $account->addProfile( $profile );
    $accountMapper->store( $account );
}
This is an extremely simplified example, especially where the mappers are initialized, because that part in a real-world situation would be handled by some factory. Kinda like described in this post.
The point here is that the validation of domain data and the enforcement of DB integrity are done separately. The underlying APIs for interacting with the database actually return an error code if you violate a UNIQUE KEY, FOREIGN KEY, or any other constraint, which you can then use to determine what went wrong.
The method itself would be part of a service (in this case - some service that manages user accounts).
Note: if your application needs to perform multiple SQL interactions per operation, and those interactions need to be done as a transaction with the ability to roll back, then instead of using data mappers directly you should look at implementing Units of Work. To learn about UoW you will have to read Patterns of Enterprise Application Architecture, because it's a really extensive subject.
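The core mechanics of a Unit of Work can be sketched in a few lines. The following is an illustrative, language-neutral sketch in Python; every class and method name here is invented, not taken from any framework. It records new and changed objects during one business transaction and pushes them all to the mappers in a single commit step:

```python
class UnitOfWork:
    """Collects changes made during one business transaction."""
    def __init__(self):
        self.new, self.dirty = [], []
        self.committed = False

    def register_new(self, obj):
        self.new.append(obj)

    def register_dirty(self, obj):
        if obj not in self.dirty:
            self.dirty.append(obj)

    def commit(self, mappers):
        # A real implementation would open a DB transaction here,
        # run INSERTs for self.new and UPDATEs for self.dirty,
        # and roll the transaction back on failure.
        try:
            for obj in self.new:
                mappers[type(obj).__name__].insert(obj)
            for obj in self.dirty:
                mappers[type(obj).__name__].update(obj)
            self.committed = True
        except Exception:
            self.committed = False
            raise
```

The service method would register objects as it goes and call commit() once at the end, instead of talking to each mapper directly.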
I can't get this to work.
<?php
function __autoload($classname){
    include 'inc/classes/' . $classname . '.class.php';
}
__autoload("queries")
$travel = new queries();
echo $travel->getPar("price");
?>
And this is the inc/classes/queries.class.php file.
<?
class queries {
    function getPar($par, $table='travel', $type='select') {
        $result = $db->query(" $type * FROM $table WHERE $par LIKE ");
        while ($row = $result->fetch_assoc()) {
            return " $row[$par] ";
        }
    }
}
?>
It returns "Class 'queries' not found". What's wrong with it?
EDIT:
Fatal error: Cannot redeclare __autoload() (previously declared in /index.php:5) in /index.php on line 5
What the hell? I can't redeclare a function that is already declared in its own line, why?
Instead of that dreadful abomination, you should learn how to utilize
spl_autoload_register():
spl_autoload_register( function( $classname ){
    $filename = 'inc/classes/' . $classname . '.class.php';
    if ( !file_exists( $filename) ){
        throw new Exception("Could not load class '$classname'.".
                            "File '$filename' was not found !");
    }
    require $filename;
});
And you should register the autoloader in your
index.php or
bootstrap.php file, and do it only once per loader (this ability lets you define multiple loaders, which is useful when you have a third-party library with its own autoloader, as in the case of SwiftMailer).
P.S. please learn to use prepared statements with MySQLi or PDO.
Update
Since you are just now learning OOP, here are a few things which you might find useful:
Lectures:
Books:
I found an interesting discussion here:
Quote:
DataSets are sub-par amateur solutions to code your Data layer...Stop Using Them and learn to CODE! :)
What is your opinion of DataSets vs. just stubbing out custom classes and working with those? What are other alternatives?
Snobbery aside, DataSets can be useful in applications with relatively straightforward business logic and where the developer has some control over the database schema, such that data tables match 1:1 business objects (which in this case are usually comprised of a DataRow).
Martin Fowler discusses this very clearly in Patterns of Enterprise Application Architecture (Amazon link). DataSets (Table Module in Fowler's nomenclature) are a good match to a Transaction Script, whereas a Domain Model requires more well defined classes and a mapper to the database (because a 1:1 correlation between business objects and tables is typically not achievable in this situation).
DataSets/Table Module have a lot of limitations, but they have advantages in simplicity, especially if you are working in .NET.
A pragmatic programmer will assess the requirements of a given application and apply patterns and technologies that fit best, even if they aren't the sexiest. For simple scenarios with straightforward 1:1 mapping to relational data, DataSets don't suck. Often, though, they do, and any programmer that relies on them exclusively is in for trouble in more complicated scenarios.
I have 2 tables for now, as an example: ClientOrder and Products. Using Linq, I have been able to write all the queries that I want to run:
1. Search by Client order
2. Search by Client name
3. Search by Product name
4. Search by Product ID
I want to create methods for each of the above queries. The question is: what pattern is appropriate here? The Factory pattern does not seem to fit the bill, as I know that each of my objects will be using the same data context.
Is it wiser to just create a static class with the 4 static methods?
Note: I am 5 months into the programming world and a newbie.
I found Martin Fowler's Patterns of Enterprise Application Architecture helpful in learning how people structure access to database tables. Some of these patterns are listed here.
For your simpler task, a single class with four static methods sounds perfectly reasonable. But you should consider Fowler's Table Data Gateway pattern, where you package all the access to each table in its own class of static methods (and use a standard naming convention).
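A Table Data Gateway, reduced to its essentials, is one class per table whose methods wrap that table's queries, so SQL appears in exactly one place. A minimal illustrative sketch in Python with SQLite follows; the table and method names are invented to mirror the question, and a C#/Linq version would have the same shape:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO products (id, name) VALUES (1, 'Widget')")

class ProductGateway:
    """All access to the products table lives here."""
    @staticmethod
    def find_by_id(product_id):
        cur = conn.execute("SELECT id, name FROM products WHERE id = ?",
                           (product_id,))
        return cur.fetchone()

    @staticmethod
    def find_by_name(name):
        cur = conn.execute("SELECT id, name FROM products WHERE name = ?",
                           (name,))
        return cur.fetchone()

print(ProductGateway.find_by_name("Widget"))   # (1, 'Widget')
```

A ClientOrderGateway would hold the other two queries, keeping each table's access in its own class.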
I have been following the modified Model View Controller example presented in this article by Oracle: Java SE Application Design with MVC.
In this example the DefaultController class declares property name strings like this:
public static final String ELEMENT_TEXT_PROPERTY = "Text";
These strings are used in multiple places.
In the controller:
AbstractController uses reflection to search its models for a method named set+propertyName.
In the Model:
The model has setter methods which call this method:
firePropertyChange(Controller.NAME_PROPERTY, oldName, name);
In the View:
In the modelPropertyChange method,
if(evt.getPropertyName().equals(Controller.NAME_PROPERTY))
The part that concerns me is that the model is referencing the controller's static fields. I am still trying to master MVC and I am unsure if this architecture achieves the desired decoupling that MVC is used for. Does it matter that the all three classes reference the controller's static fields?
EDIT: Is the architecture described in this tutorial, as coded, an invalid representation of MVC? If so, can it be altered so that the model / (model layer) is not dependent on the static property names defined in the DefaultController class?
I realize this article is "just a tutorial", but if the code presented in it does not reflect the claimed decoupling, I think that the community should stop referencing it as an example of MVC.
I am not a Java (or even desktop application) developer, so take this all with a grain of salt.
First of all, static fields in any code represent global state, which goes completely against good practice in object-oriented code.
As for your worries that the code in the tutorial violates SoC, you are right. But then again, it is intended as a basic tutorial and not something too advanced. The goal is to show the parts of the MVC triad and give at least a surface-level comprehension of how they interact and what their main responsibilities are. It is not meant to be copy-pasted into production code.
One clear sign of this is that there exists a "model object" in this code. In a real-world application the model would be a layer.
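One way to remove the model's dependency on the controller's constants is to let the model own its property names and simply fire change events; then neither the model nor the view references the controller. The following is a minimal, framework-free sketch in Python (all names invented), not a rewrite of the Oracle example:

```python
class Model:
    NAME_PROPERTY = "Name"          # the model owns its property names

    def __init__(self):
        self._name = ""
        self._listeners = []

    def add_listener(self, fn):
        self._listeners.append(fn)

    def set_name(self, name):
        old, self._name = self._name, name
        for fn in self._listeners:  # fire a property-change event
            fn(Model.NAME_PROPERTY, old, name)

events = []
m = Model()
m.add_listener(lambda prop, old, new: events.append((prop, old, new)))
m.set_name("Alice")
print(events)   # [('Name', '', 'Alice')]
```

The view now compares against Model.NAME_PROPERTY, and the controller is just another listener rather than the owner of the vocabulary.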
If you want to gain a more extensive understanding of MVC, I can recommend three reading materials:
I have been having a difficult time understanding how to separate my program into an n-tier application with 3 layers (BLL, DAL, UI). I have so much confusion on the subject that I don't know where to begin. I have looked at videos, articles, and application examples, but I find them hard to comprehend. There seems to be a lack of consistency from one example to the next. Is there any resource I can look at that is extremely thorough on the subject?
For reference, I am an entry-level C# .NET developer; is this too big of a topic to tackle right now? I understand the concept completely; however, I do not know how to implement it well.
You should read books like
Martin Fowler - Patterns of Enterprise Application Architecture
Dino Esposito - Microsoft® .NET: Architecting Applications for the Enterprise
But they can be too difficult if you are an entry-level C# developer (and not a senior Java developer at the same time :))
You can get an overview and basic understanding here, or search for some short and easy articles on this subject.
I have to design a school web ERP for a client of mine.
The following modules need to be present as of now:
Student
Inventory
Cafeteria
Admissions
Please help me with links or a book that can guide me through the steps needed to create the design, and the differences between the designs that can be used. For example, in one of the links I read that component-based design should be used.
Also, Please note that the modules need to be licensed. i.e. The user must be able to choose one or more modules of his interest and only those modules should be installed.
I am new to this area and this exercise is primarily for learning process so I WANT TO DO THIS FROM SCRATCH AND NOT USE THE EXISTING OPEN SOURCE AT THIS POINT OF TIME.
Please help me with the required links / books / papers etc...
This question is way too broad to be completely answered with a single post. But if you're looking for a book to explore architectural approaches I can highly recommend the following, which in my opinion is a must-read for every developer anyway:
Patterns of Enterprise Application Architecture by Martin Fowler
I'm building some application that involves training programs. My issue is like this:
A workout could be as simple as this:
3 sets of 45 push ups.
so I would just create 2 fields: sets / count.
BUT a workout could be also:
a 45-minute run, 3 sets of 45 pushups, 2 minutes of rope jumping, 150 meters of swimming.
So I need to build one table that would know how to store the data as its structure changes, and later I could still translate it back to real data in the GUI.
How can I do this efficiently and wisely?
edit:
To make it a bit clearer: I want to specify for each workout what I've done in it. So one workout could be: 3 sets, first: 45 push ups, second: 32 push ups, third: 30 push ups.
And another workout could be: 3 sets of pushups (first: 45 push ups, second: 32 push ups, third: 30 push ups), and also 2 minutes of jumping rope and 150 meters of swimming.
The data isn't consistent: one set could be a number of push ups, the next could be a time length, etc.
You may want to consider a database schema such as the following:
CREATE TABLE workouts (
    workout_id int,
    user_id int,
    PRIMARY KEY (workout_id)
) ENGINE=INNODB;

CREATE TABLE sessions_pushups (
    started datetime,
    workout_id int,
    number int,
    PRIMARY KEY (started, workout_id),
    FOREIGN KEY (workout_id) REFERENCES workouts (workout_id)
) ENGINE=INNODB;

CREATE TABLE sessions_rope_jumping (
    started datetime,
    workout_id int,
    duration_minutes int,
    PRIMARY KEY (started, workout_id),
    FOREIGN KEY (workout_id) REFERENCES workouts (workout_id)
) ENGINE=INNODB;

CREATE TABLE sessions_swimming (
    started datetime,
    workout_id int,
    meters int,
    PRIMARY KEY (started, workout_id),
    FOREIGN KEY (workout_id) REFERENCES workouts (workout_id)
) ENGINE=INNODB;
This allows you to have complex workouts that do not follow the schema of previous workouts. You could have something like this very easily:
CREATE TABLE sessions_triathlon (
    started datetime,
    workout_id int,
    swimming_meters int,
    cycling_meters int,
    running_meters int,
    duration_minutes int,
    PRIMARY KEY (started, workout_id),
    FOREIGN KEY (workout_id) REFERENCES workouts (workout_id)
) ENGINE=INNODB;
Martin Fowler calls the above model "Concrete Table Inheritance" in his Patterns of Enterprise Application Architecture book. Bill Karwin also describes this model in his SQL Antipattens book, in the Entity-Attribute-Value chapter. He also describes the disadvantages in choosing an EAV model to tackle such a scenario.
On the other hand, if you want total schema flexibility, you could consider NoSQL solutions instead of MySQL. These data stores do not normally require fixed table schemas.
I'm looking for books/online resources about Java EE application deployment patterns. In particular I want to know when to use local interfaces and when to use remote ones (how many nodes do I need?). The key question is whether to use a single EAR (EJB and WAR) or separate nodes for EJBs and WARs.
The resources I've found are a bit outdated; they concentrate on EJB2 or design patterns. I'm interested in newer technologies like EJB3, Spring or Seam.
I'm studying the business layer and need a complete reference which covers issues such as "how to manage the dependency between the business layer and other layers", "how many ways there are to send data between layers" and, most important for me, "how to group business logic into business components and the possible ways to do so".
do you know any reference?
EDIT: I would be delighted if you introduce some e-book for it.
Thank you
I'm writing a simple Java application (for study purposes) to manage employees, and I need advice on how to store and retrieve data from the database.
The code that I have written so far is too big to put here, so, in two words:
I have the following hierarchy:
abstract class Employee: 4 attributes, getters, setters
class Salaried: 2 new attributes
class Hourly: 2 new attributes
class Director: 3 new attributes
class Manager: 1 new attribute
I have a MySQL database with 1 table (create script):
CREATE TABLE `employee` (
    `SSN` int(9) NOT NULL PRIMARY KEY,
    `FirstName` varchar(20) NOT NULL,
    `LastName` varchar(20) NOT NULL,
    `Department` varchar(20) NOT NULL,
    `Salary` float(10) NULL,
    `OvertimeHours` float(10) NULL,
    `HourlyWage` float(10) NULL,
    `NumberHours` float(10) NULL,
    `Organization` varchar(30) NULL,
    `Bonus` float(10) NULL
);
First 4 fields are general for all employees.
Salary and OvertimeHours are attributes of the Salaried class
HourlyWage and NumberHours are attributes of the Hourly class
Salary, Bonus and Organization are attributes of the Director class
Salary is also an attribute of the Manager class
I've created a static class Database to work with MySQL.
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Connection;

public abstract class Database {
    // constants
    private static final String DRIVER = "com.mysql.jdbc.Driver";
    private static final String DBNAME = "records";
    private static final String DBUSER = "root";
    private static final String DBPASS = "";
    private static final String CONURL = "jdbc:mysql://localhost/" + DBNAME;

    // class attributes
    private static Connection connection = null;

    public static boolean fillEmployee(Employee emp, int ssn) {
        try {
            PreparedStatement stm = connection.prepareStatement(
                "SELECT FirstName, LastName, Department " +
                "FROM employee " +
                "WHERE SSN = ?"
            );
            stm.setInt(1, ssn);
            ResultSet rs = stm.executeQuery();
            if (!rs.next())
                return false;
            emp.setSocialSecurity(ssn);
            emp.setFirstName(rs.getString("FirstName"));
            emp.setLastName(rs.getString("LastName"));
            emp.setDepartment(rs.getString("Department"));
            stm.close();
            rs.close();
        } catch (Exception e) {
            System.out.println(e.getMessage());
            System.exit(0);
        }
        return true;
    }

    public static boolean deleteEmployee(int ssn) {
        try {
            PreparedStatement stm = connection.prepareStatement(
                "DELETE " +
                "FROM employee " +
                "WHERE SSN = ?"
            );
            stm.setInt(1, ssn);
            return (stm.executeUpdate() == 1);
        } catch (Exception e) {
            System.out.println(e.getMessage());
            System.exit(0);
        }
        return false;
    }

    // class methods
    public static Salaried getSalariedEmployee(int ssn) {
        Salaried employee = new Salaried();
        try {
            if (!fillEmployee(employee, ssn))
                return null;
            PreparedStatement stm = connection.prepareStatement(
                "SELECT Salary, OvertimeHours " +
                "FROM employee " +
                "WHERE SSN = ?"
            );
            stm.setInt(1, ssn);
            ResultSet rs = stm.executeQuery();
            employee.setSalary(rs.getFloat("Salary"));
            employee.setOvertimeHours(rs.getFloat("OvertimeHours"));
            stm.close();
            rs.close();
        } catch (Exception e) {
            System.out.println(e.getLocalizedMessage());
            System.exit(0);
        }
        return employee;
    }

    public static void createConnection() {
        if (connection != null)
            return;
        try {
            Class.forName(DRIVER);
            connection = DriverManager.getConnection(CONURL, DBUSER, DBPASS);
        } catch (Exception e) {
            System.out.println(e.getMessage());
            System.exit(0);
        }
    }

    public static void closeConnection() {
        if (connection == null)
            return;
        try {
            connection.close();
            connection = null;
        } catch (Exception e) {
            System.out.println(e.getMessage());
            System.exit(0);
        }
    }
}
What do you think about the getSalariedEmployee and fillEmployee methods? How can I improve the overall design and architecture of my application?
Perhaps you should start with a good reading of the book Patterns of Enterprise Application Architecture. It has a good chapter covering the different ways in which we typically deal with the database.
You can read quick definitions of these in the companion web site:
All the patterns have advantages and disadvantages, and some of them have entire frameworks that help you write code for them.
I was programming for a while. Starting at school, writing small utility programs as a hobby, and now professionally. My problem is I'm getting distracted while writing software (I come up with a new feature and follow it immediately), so my code is usually disorganized. Now when I'm starting my career as a professional developer, I find out that even though my software works pretty well, the code doesn't look pretty. I often find myself creating too many or too few classes; sometimes it just doesn't feel right. Overall I'm losing precious time when I could earn money doing another project.
I’m looking for a book that will teach me how to design software structure without juggling the code in the middle of the creation process.
If you're looking for Design Patterns, there are two authoritative books to look at:
Head First Design Patterns
Design Patterns: Elements of Reusable Object-Oriented Software
if you are doing enterprise softare:
I would like to know how I can go about implementing my own version of an MVC framework in ASP.NET. I know there is already the Microsoft-provided ASP.NET MVC framework, but I want to learn MVC and thought the best way would be to implement my own flavor of an MVC framework on top of ASP.NET. Any thoughts / guidance?
Also, can anyone point me to a page where I can learn more about how Microsoft implemented ASP.NET MVC? I'm more interested in learning about the under-the-hood plumbing that goes on to implement the framework on top of ASP.NET. Do they use HttpHandlers / HttpModules?
Thanks.
MVC is an architectural pattern that is not dependent on ASP.NET or any framework.
I would not advise trying to implement it on top of ASP.NET if you are just looking to learn about the Model View Controller pattern. ASP.NET will impose too many implementation details when you should instead be concentrating on the overall concepts, such as separation of concerns and single responsibility.
The ASP.NET MVC framework is simply an implementation of the Model View Controller pattern that runs on top of ASP.NET. Like most implementations it contains variations on the basic MVC pattern to better suit web applications and the underlying framework. So trying to re-implement ASP.NET MVC will give you a non-standard understanding of the pattern.
Martin Fowler explains the fundamental ideas of MVC in the book Patterns of Enterprise Application Architecture.
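The separation the pattern asks for can be shown framework-free. Here is a deliberately tiny sketch in Python rather than ASP.NET (every name is invented): the model holds state and knows nothing about presentation, the view only renders, and the controller mediates between them:

```python
class CounterModel:
    def __init__(self):
        self.value = 0
    def increment(self):
        self.value += 1

class CounterView:
    def render(self, value):
        return f"Count: {value}"          # a real web view would produce HTML

class CounterController:
    def __init__(self, model, view):
        self.model, self.view = model, view
    def handle_click(self):
        self.model.increment()            # update state
        return self.view.render(self.model.value)  # re-render

app = CounterController(CounterModel(), CounterView())
print(app.handle_click())   # Count: 1
print(app.handle_click())   # Count: 2
```

Once this core shape is clear, it is easier to see which parts of any web MVC framework are the pattern and which are web-specific variation.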
Rather a simple question. But the implications are vast.
Over the last few weeks I've been reading a lot of material about n-tier architecture and its implementation in the .NET world. The problem is I couldn't find a relevant sample for WinForms with Linq (Linq is the way to go for the BLL, right?).
How did you guys manage to grasp the n-tier concept? Books, articles, relevant samples etc.
UPDATE: I kind of wanted a sample app, not just theory. I like to get into the specific implementation and then iterate the principles myself.
It isn't technology specific but this is a very good book about n-tier architecture: Patterns of Enterprise Application Architecture.
The subject line says it all.
Googling DAL & DAO returns only C# .NET related results.
Is there a DAL/DAO equivalent pattern in the Java world? Are there any reference implementations available?
Of course it applies to Java as well: Don't Repeat The DAO!
Have a look at Fowler's Patterns of Enterprise Application Architecture. Core J2EE Patterns also refers to DAO.
It'd be interesting to check the dates on your C#/.NET references. I'd bet that the idea started on the Java side and was adopted later by .NET. Microsoft probably had another persistence technology that was their "best practice". If I recall correctly, VB used to tie UI elements closely to columns, without an intermediate layer in between to separate the view from the database.
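The DAO idea is the same in any language: hide the persistence mechanism behind an interface so callers never see SQL or connection details. An illustrative sketch in Python follows (all names invented; in Java this would be an interface plus implementations):

```python
from abc import ABC, abstractmethod

class UserDao(ABC):
    """The interface: callers depend on this, not on any database."""
    @abstractmethod
    def find(self, user_id): ...
    @abstractmethod
    def save(self, user_id, name): ...

class InMemoryUserDao(UserDao):
    """A test double; a JDBC- or Hibernate-backed DAO would be a drop-in."""
    def __init__(self):
        self._rows = {}
    def find(self, user_id):
        return self._rows.get(user_id)
    def save(self, user_id, name):
        self._rows[user_id] = name

dao: UserDao = InMemoryUserDao()
dao.save(1, "Alice")
print(dao.find(1))   # Alice
```

Swapping implementations behind the interface is exactly what makes the business layer testable without a database.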
Which is the best tutorial for understanding Java design patterns? I am not new to Java programming, so the basics of Java are not required.
You may find these useful:
Design Patterns: Elements of Reusable Object-Oriented Software
Head First Design Patterns
Pattern Hatching: Design Patterns Applied
Patterns of Enterprise Application Architecture
(but the last two are a bit advanced)
Joshua Bloch's book Effective Java is useful even though it's not about design patterns, and it's a must-read.
You must read
Head First Design Patterns.
Though not specific to Java, I seriously recommend you go for:
Design Patterns by Gamma, et al (a.k.a. the Gang of Four, or simply, GoF)
If you are not new to programming you have probably already used a lot of them without realizing it. Design patterns are not language specific, try not to think of them as 'Java' patterns or 'Python' patterns etc. A really good book is Design Patterns: Elements of Reusable Object-Oriented Software.
I am a college student and I'm learning about software patterns (specifically the ones mentioned in the GoF book). I have never been that good at software design, but I'm learning how each pattern works and solves different problems to produce more flexible software. I have a question that has been bugging me.
What's the best way to look at a problem (a piece of software that needs to be written) and determine how to organize it and apply patterns? I recently had a group Java project that went really sour because our design just wasn't very flexible. I honestly just had a great deal of trouble trying to break down the problem into manageable pieces to sort out. I know I can write code, but I don't know how to organize it.
The patterns I have gone through currently are Composite, Builder, Adapter, Proxy, Observer, State, Strategy, Template Method, and Iterator. Like I have mentioned, I know what they are supposed to solve, but I feel like I am trying to force patterns into place. Any help or links to pages would be greatly appreciated. Thank you all!
Patterns are not only a programming tool but also a communication one as well. When there are tens or hundreds of thousands lines of code you need to be able to find your way round, visualise parts and be able to talk to fellow developers about the code effectively. Patterns do this.
The GOF patterns are a good starting point to learn about them but you need to apply them to a situation, not create a situation to use them.
When you get better at applying patterns to the right situation, start expanding your knowledge of enterprise patterns and begin to see how patterns can be connected together. From the latter, the shape of your application architecture will start to emerge.
Trust me, it can take a long time and experience to get good at them and to know when to apply them. Just be patient and curious at the same time.
I am trying to get my head around the common patterns for database abstraction.
So far I've found:
Please don't worry too much about my quick explanations of the patterns. I am still in an understanding phase.
But is this list complete or are there other concepts which are missing here?
Martin Fowler's "Patterns of Enterprise Application Architecture" is an excellent book, well respected in the community, which documents about fifty design patterns, around half of which are concerned with interacting with databases. It includes Repository, several kinds of DAOs (more or less covering your Database Layer and DAO) and several entire categories of patterns found in object-relational mappers. So there's a good place to start.
It's hard to summarize any more of the content of POEAA in this answer without simply repeating the list of patterns. Fortunately the list can be found at Fowler's web site. Unfortunately the copyright symbol there suggests that I shouldn't just include it here.
I like Active Record, but many say its performance is bad compared to Hibernate's. There should be some good article out there, but Google can't help me.
If what you want is to compare Active Record (Rails) and Data Mapper (Hibernate), the 'Data Source Architectural Patterns' chapter from Patterns of Enterprise Application Architecture is a good place to start.
It explains clearly the concepts behind these patterns and when to use each one, but doesn't discuss specific implementations like Rails or Hibernate.
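The conceptual difference between the two patterns fits in a few lines. Here is a minimal sketch in Python (invented names, storage replaced by a dict): in Active Record the object persists itself, as in Rails; with a Data Mapper, a separate object moves data between the plain domain object and storage, as in Hibernate:

```python
STORE = {}

class ActiveRecordPost:                 # the object knows how to save itself
    def __init__(self, pid, title):
        self.pid, self.title = pid, title
    def save(self):
        STORE[self.pid] = self.title

class Post:                             # plain domain object, no persistence
    def __init__(self, pid, title):
        self.pid, self.title = pid, title

class PostMapper:                       # persistence lives elsewhere
    def save(self, post):
        STORE[post.pid] = post.title

ActiveRecordPost(1, "Hello").save()
PostMapper().save(Post(2, "World"))
print(STORE)   # {1: 'Hello', 2: 'World'}
```

Performance differences come mostly from the implementations (caching, lazy loading, query generation), not from the patterns themselves.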
Does anyone think things are going over the top in terms of adding new OOP functionality to PHP?
When PHP started with OOP it was great, but now there are too many new features. The features I'm talking about are static methods, late static bindings, etc.
I can see where autoloading and namespaces are useful, but some of the latest stuff seems like they are being lazy: instead of trying to work out a solution using true OOP, they are just adding new features to the core.
Are the PHP developers getting the feature ideas from other languages like Java, or are they just making new things up?
What's wrong with just having basic OOP?
In our last tutorial on Zend Framework we learned to create static pages in Zend Framework; today we will be learning the concept of autoloading in Zend Framework.
The basic way to load any class or file into another class or file in PHP is to use the include() or require() directives. But if we need the class file throughout the whole application, these include() or require() statements will be repeated in every class or file wherever the desired file is needed. To avoid this, the concept of autoloading came into existence.
- What is Autoloading?
- Autoloading is a mechanism which eliminates the need to manually include the file.
- Once an autoloader has been defined, it is automatically called when you are trying to use any class or interface.
- With an autoloader there is no need to worry about where the class being autoloaded is located in the project, because the defined autoloader will perform the file lookup for us.
- Zend Framework Autoloading
- In Zend Framework, naming conventions for class names set up a relationship between class names and class files.
- Here the class name has a 1:1 relationship with the file system.
- The underscore character ("_") in the class name is replaced by the directory separator ("/") to resolve the path to the file, and a .php extension is appended as a suffix. For example, if the class name is Animal_Aquatic_Fish, it resolves to the file Animal/Aquatic/Fish.php.
- In Zend we use a common class prefix, say Zend_, as a namespace prefix to prevent naming collisions.
- By default Zend Framework uses Zend_Loader_Autoloader as its autoloader.
- In simple cases we just require the class and then instantiate it. But as Zend_Loader_Autoloader is a singleton, we use the getInstance() method to retrieve the class instance as shown below.
- By default, this will allow loading any classes with the class namespace prefix Zend_ or ZendX_, as long as they are on your include_path.
- But if we have classes with a class namespace prefix other than Zend_ or ZendX_, they need to be registered using the registerNamespace() method as shown below.
- An array of namespaces can also be registered as shown below.
- Zend_Loader uses the static method loadFile() to load a PHP file dynamically. This method is a wrapper around the PHP include() construct, which loads the file.
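The snippets those bullets refer to can be sketched as follows. This is a minimal illustration of the Zend Framework 1 API described above; the My_ and App_ prefixes and the file path are placeholder assumptions, not values from the original tutorial:

```php
<?php
// Retrieve the singleton autoloader (handles Zend_ / ZendX_ by default).
require_once 'Zend/Loader/Autoloader.php';
$autoloader = Zend_Loader_Autoloader::getInstance();

// Register one additional class-namespace prefix...
$autoloader->registerNamespace('My_');

// ...or several at once.
$autoloader->registerNamespace(array('My_', 'App_'));

// Load a PHP file dynamically; loadFile() wraps include().
Zend_Loader::loadFile('Fish.php', '/path/to/lib/Animal/Aquatic', true);
```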
Thus we have had an overview of autoloading in Zend Framework in this tutorial.
Hi guys,
I am a newbie to Rails so forgive me if this has been answered before. I am currently doing an inventory management project in RoR and have 3 tables: goods, customers and staff. I used scaffold to generate all of them.
I just generated a new scaffold called search and I need to search values from all 3 tables. I am using the steps from this site. The only change is from Person/people to Book/books. But I keep getting this error:
You have a nil object when you didn't expect it!
You might have expected an instance of Array.
The error occurred while evaluating nil.each
Can someone help? The places where I put the code are below:
def search
  query = params[:q]
  @people = Person.find(:all, :conditions => ["name = ?", query])
end
This is in search_controller.rb. The view code is in view/search/index.rhtml and search.rhtml.
Thank you so much!!
regards
billy
semEvStart() associates an event, a semaphore and a task. It notifies the task about the semaphore by triggering the event. This notification event gets triggered based on the options specified during semEvStart().
The event thus triggered can be caught using eventReceive().
The user events supported by VxWorks are VXEV01 to VXEV24.
Here goes the sample code ...
#include "eventLib.h"

semEvStart(semId, VXEVnn, options);
semId : the semaphore ID returned by the semaphore creation routine (e.g. semBCreate())
VXEVnn : one of VXEV01 to VXEV24
options :
EVENTS_SEND_ONCE (0x1) : tells the semaphore to send the events one time only.
EVENTS_ALLOW_OVERWRITE (0x2) : controls whether another task may register itself while the current task is still registered; if set, subsequent registrations overwrite the current one without any warning.
EVENTS_SEND_IF_FREE (0x4) : tells the registration process to send events at registration time if the semaphore is already free.
EVENTS_OPTIONS_NONE : use if none of these options are wanted.
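Putting it together, a receiving task might look like the sketch below. This is an illustrative assumption rather than tested code: the semEvLib.h header, the choice of VXEV01, and the option combination are guesses consistent with the VxWorks documentation, and semId is assumed to come from a prior semBCreate().

```c
#include <vxWorks.h>
#include <semLib.h>
#include <eventLib.h>
#include <semEvLib.h>   /* assumed location of semEvStart() */

/* Runs in the context of the task that wants to be notified. */
void waitForSemEvent(SEM_ID semId)
{
    UINT32 received;

    /* Register: send VXEV01 once, and immediately if the semaphore is free. */
    semEvStart(semId, VXEV01, EVENTS_SEND_ONCE | EVENTS_SEND_IF_FREE);

    /* Block until the event arrives, then take the semaphore. */
    if (eventReceive(VXEV01, EVENTS_WAIT_ANY, WAIT_FOREVER, &received) == OK)
    {
        semTake(semId, WAIT_FOREVER);
        /* ... do the work the semaphore protects ... */
        semGive(semId);
    }
}
```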
Thanks,
RChandran | http://fixunix.com/vxworks/326997-sample-code-neeeded-semevstart.html | CC-MAIN-2015-18 | refinedweb | 173 | 58.48 |
```python
from brian2 import *
from brian2tools import brian_plot
%matplotlib notebook
prefs.codegen.target = 'numpy'
```
The following example is a leaky integrate-and-fire neuron with a constant current input. As soon as the membrane potential crosses the threshold of -50mV, a spike is emitted and the membrane potential is reset to -70mV.

```python
run(100*ms)
fig, ax = plt.subplots()
brian_plot(mon, axes=ax)
ax.axhline(-50, linestyle=':');
```
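The definitions assumed by the snippet above (the neuron group and the monitor `mon`) can be sketched as follows. This is a hypothetical reconstruction: the name `neuron` and the exact equation and parameter values are assumptions chosen to match the thresholds described in the text, not the original code.

```python
# Hypothetical reconstruction -- a LIF neuron with constant input that
# crosses the -50 mV threshold and is reset to -70 mV.
neuron = NeuronGroup(1, 'dv/dt = (-70*mV - v + 25*mV) / (10*ms) : volt',
                     threshold='v > -50*mV', reset='v = -70*mV')
neuron.v = -70*mV
mon = StateMonitor(neuron, 'v', record=0)
```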
If you zoom into the plot above, you see that the membrane potential never seems to cross the threshold! We can also see this by analyzing the recorded membrane potential values:
```python
mon.v[0].max()
```
The reason for this becomes clear when we look into Brian's scheduling in more detail. Brian comes with a useful function `scheduling_summary` that displays the scheduling information for the current network:

```python
scheduling_summary()
```
As you can see above, the first thing that gets executed during a time step is the `StateMonitor`, followed by the state update step (the numerical integration of the differential equations), the threshold check, and finally the reset. Now the previous observation makes sense. In a time step where the threshold is crossed, the following things happen:
- The membrane potential gets recorded (it is still below the threshold)
- The state update step updates the membrane potential, it is now above the threshold
- The thresholder compares the membrane potential to the threshold and signals a spike
- The resetter resets the membrane potential
The `StateMonitor` therefore never records a membrane potential that is above the threshold.
How is the order of operations determined? Each object has a `when` and an `order` attribute. The basic execution slot is defined by the `when` attribute; the `order` attribute is only used when there is more than one object in the same slot.
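As a toy illustration (not Brian's actual implementation), the resulting execution order can be thought of as sorting all objects by the position of their `when` slot in the schedule and then by their `order` value. The slot assignments below are illustrative, matching the summary discussed above:

```python
# A toy model of Brian-style scheduling: objects run sorted by their
# slot's position in the schedule, then by their 'order' value.
schedule = ['start', 'groups', 'thresholds', 'synapses', 'resets', 'end']

# (name, when, order) -- slot assignments are illustrative
objects = [
    ('resetter',      'resets',     0),
    ('thresholder',   'thresholds', 0),
    ('state_updater', 'groups',     0),
    ('statemonitor',  'start',      0),
]

execution_order = sorted(objects,
                         key=lambda obj: (schedule.index(obj[1]), obj[2]))
print([name for name, _, _ in execution_order])
# → ['statemonitor', 'state_updater', 'thresholder', 'resetter']
```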
The slots and their order are given in the `schedule` attribute of the `Network` object (here we use `magic_network`, because we haven't constructed a `Network` object ourselves):
```python
magic_network.schedule
```
['start', 'groups', 'thresholds', 'synapses', 'resets', 'end']
In addition to the listed slots, there are `before_...` and `after_...` slots for each of the names, just before and after the corresponding slots. If we are interested in recording the membrane potential before the threshold is checked, we can therefore use `before_thresholds`:

```python
..., when='before_thresholds')  # <-- change here
run(100*ms)
fig, ax = plt.subplots()
brian_plot(mon, axes=ax)
ax.axhline(-50, linestyle=':');
```
Now, the membrane potential that gets recorded by the `StateMonitor` does indeed cross the threshold, as you can confirm by zooming into the above plot or by checking the recorded values:

```python
mon.v[0].max()
```
We can easily verify that this change is due to the change in scheduling:
```python
scheduling_summary()
```
As you can see above, the `StateMonitor` now records its values after the state update step and no longer before.