Clarion Object Based Driver Kit Documentation

Compatibility - Reliability - Functionality - Performance - Security

Version 1.00 - April 25 2025


Contents

General Information
Check List
Files
Views
Filters
Performance Tips
Debugging
Custom Field Types
Writing a Driver
DOS2 Driver
SQLite2 Driver
InMemory2 Driver
Release History

General Information

Introduction

In the beginning was Clarion Professional Developer, which ran on DOS and introduced the idea of data-access structures (FILEs) and commands built into the fundamentals of the language. The database in question was simply known as DAT, an on-disk shared-file system (a technique commonly known as ISAM). This was followed by Clarion Database Developer, which extended the idea to provide a common set of language commands which, via a Database Driver, could access different databases (like Btrieve or Clipper) in addition to the DAT system (now known simply as the Clarion driver).

Clarion for Windows ran with this approach and introduced many more drivers, including a new ISAM driver (Topspeed) and a number of drivers for SQL databases and ODBC. It also introduced a new program structure, the VIEW, which allows data access across multiple tables at the same time. In later versions a property syntax, table{prop:whatever}, emerged to allow for more interactivity with the driver itself, and for more generic code.

The addition of SQL drivers to the mix presented a challenge to the language. The overriding goal was standard driver behavior, regardless of the back end. While some concessions have been made to SQL drivers over the years, the primary functionality is dictated by the abilities of the ISAM drivers. For anything more than basic SQL commands, a {prop:Sql} interface was used to pass raw SQL directly to the back end.

This new generation of driver code was born out of the need to update the driver interface to be SQL-First. In other words, ISAM functionality is useful, and the possibility for new ISAM drivers exists, but the primary goal of this code is to make more, and better, SQL drivers. It is a fundamental premise that new drivers will likely be SQL based. Thus non-SQL drivers will need additional documentation to note where they diverge from this assumption.

While it is outside the scope of this document to "teach SQL", SQL drivers built with these classes introduce a number of new settings and properties which are unique to SQL. These will be discussed here, however for more information on correctly using these settings it's worth reading this in conjunction with some external information on SQL in general. (AI engines like ChatGPT or CoPilot do a good job explaining SQL commands and how they might be useful to you.)

Regardless of the emphasis on moving forward and embracing new concepts, backwards compatibility is a big priority. A new driver is largely worthless if it can't replace an old driver with minimal effort. Clarion programs typically consist of a very large existing code base, and being able to plug into that base is a priority. With very few exceptions, every effort has been expended to keep compatibility with existing code. Divergences from traditional driver behavior are noted here.

These classes are written in Clarion itself, and are thus more accessible to Clarion programmers. The drivers are coded in Clarion, and so can be easily modified, extended and adapted. This allows developers to fine-tune the drivers to their needs. The example drivers are shipped as source code, and allowance is made for shipping new file drivers as source code as well.

Five Reasons Why

This is a long document with lots of information discussing why these drivers make your program better. But if you are easily distracted, here are five reasons which may convince you they're worth your time.
  1. SQL First: No SQL Cursors. No SQL Disconnect hanging. No DROP on CREATE. CREATE works, giving you full control, including SQL field type, server-side default values, server-side validation and relationships. The Prop:Sql replacement prevents SQL Injection attacks.
  2. Easier conversion from TPS; Arrays, Memos and Exclude-Null-keys supported.
  3. Native drivers for PostgreSQL, MS SQL Server, Firebird, MySQL, MariaDB, SQLite.
  4. Class based, written in Clarion, shipped with source code. Portable. Extensible. Fixable. Reliable.
  5. Lots of new features. Over 30 new Commands. Features like Full Text Search, Subsets and so much more. Improve performance by using smarter code.
There are at least one hundred more reasons below. Keep reading.

Five Reasons Why Not

Although a huge amount of work has already been done on the drivers, some work is still outstanding. This list covers some reasons why you may prefer to continue to use traditional drivers. Hopefully this list will be reduced as work on the drivers continues.
  1. IDE Integration; Currently the driver offers very limited integration in the Data Dictionary. File Structure Imports (from the database), In-Dictionary table browsing, and a GUI window for setting Driver Options are all unavailable at this time.
  2. Support for UNTHREADED FILE and VIEW structures; No thread-safety features currently exist for structures which are global but do not have the THREAD attribute. Naturally you can continue to use traditional drivers for these structures, while at the same time using object based drivers for structures that do have the THREAD attribute.
  3. No work on integration with the Dynamic File Driver or IP Driver has been done at this stage, and it is unlikely that either would currently work with these drivers.
  4. This is all very new code, and there are bound to be a fair number of edge cases, corners and other incidental bits that are either not fully implemented, have issues, or do not behave in exactly the same way as the traditional drivers. In other words there may well be some teething problems.
  5. The object based driver code is built on top of the StringTheory commercial library. In order to get to the new functionality exposed in the drivers it is necessary to have a StringTheory license. While the base classes are free, StringTheory is not.

How To Read This Document

If you have been using Clarion for some time you may be fully familiar with how the file drivers work, and the functionality they offer. (Indeed, this document assumes that you are.) Even if that is the case, you are advised to read this document from start to finish. This will serve a dual purpose:

Firstly it will expose you to the new functionality provided by the object based drivers. There are new features in the new drivers that are well worth knowing about. There are also a few important behavior changes that you need to be aware of. In essence this whole document is about differences in behavior and feature set from the traditional drivers.

Secondly it will clarify, and correct, information you may have acquired over the years. A fair amount of "common knowledge" is obsolete, or was never true. Some best practices are no longer a good idea. And indeed some little-known features of the traditional drivers will be exposed. This document thus serves not just as a reference to the new drivers, but also as a good background to database interactions in general.

It's fair to say that there's a lot of information in here that you don't really need to know. It is included though for those who are curious and want to understand some of the underlying mechanics.

This document (mostly) does not repeat the information in the Clarion Help where it is unchanged. Where this document is silent, the current driver functionality should be assumed, and the current documentation considered as valid.

Clarion commands are highlighted as COMMAND, Clarion code is marked like this, and SQL code and commands are marked like this.

Copyright and License

The Clarion File Driver Classes are copyright (c) 2025 by CapeSoft Electronics.

This product is provided as-is. CapeSoft Software and CapeSoft Electronics (collectively trading as CapeSoft), their employees and dealers explicitly accept no liability for any loss or damages which may occur from using this package. Use of this package constitutes agreement with this license. This package is used entirely at your own risk.

Use of this product implies your acceptance of this, along with the recognition of the copyright stated above. In no way will CapeSoft, their employees or affiliates be liable in any way for any damages or business losses you may incur as a direct or indirect result of using this product.

For the full EULA see https://capesoft.com/eula.html

Base Classes

The base classes, DOS2, SQLite2 and InMemory2 drivers are released free of charge under the terms below. Other free or commercial drivers may be created using these classes, by anyone, provided that source code to the derived drivers is supplied so that the new driver can be recompiled.

This software is copyright by CapeSoft Electronics. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

This copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

This code is currently in Beta Development, and all information, including information in this documentation, is subject to change.

Derived Classes

Classes derived from these classes have their own copyright and license. The above license does not apply to them.

StringTheory

The File Driver classes make use of the StringTheory classes. The StringTheory classes are not covered by this license, and a license for StringTheory must be acquired in order to use, or derive, these classes.

Acknowledgement

This work builds on work done by Clarion Software, TopSpeed Development and SoftVelocity. Without the assistance and materials supplied by SoftVelocity it would not have been possible to make this package.

A personal thanks to Robert Zaunere who never failed to assist wherever possible.

Carl Barnes' Github repository had documentation on mangling rules which proved useful.

Terminology

  1. Throughout this document it's necessary to differentiate between drivers shipped with Clarion which are not built on this code, and drivers shipped with Clarion, or supplied by 3rd parties, that are built using these classes.

    To simplify this distinction the former will be described as traditional drivers and the latter as object based drivers.
  2. In the Clarion world our tables have Keys and Indexes. In the SQL world there are Keys and Indexes. Unfortunately the distinctions between them are not the same, and so it's hard to talk about Clarion and SQL in the same paragraph, and use these terms in meaningful ways. For this reason this document uses "keys" to include both keys and indexes in the Clarion sense, and to also include both keys and indexes in the SQL sense. Specifically, "keys" is deemed to include Keys and Indexes (as declared in the Clarion Dictionary) and to include Keys And Indexes (as declared in the SQL database.)

What's New?

The list of new features added to object based drivers is too long to list here. Indeed most of this document talks about new features or new functionality. So the list here is just a small hint of what is to come if you read further.

Folder Organization

\clarion\bin
  The driver DLL's have to be located here. After compiling they are copied here.

\clarion\accessory\driver
  The Solution (SLN) to compile all the drivers (and the support DLL's).
  The Projects (CWPROJ) to compile each driver - one project for DLL mode, one for LIB mode.
  The generic DLL's (CLAOBD and CLASTR).
  Batch (BAT) files necessary for the pre, and post, build.
  The DLL and LIB files produced when building the drivers.

\clarion\accessory\driver\source
  The CLW, INC, INT and EXP files necessary for building the drivers and support DLL's.

\clarion\accessory\driver\ObdExport
  A utility (including source code) for creating EXP files from INC files.

\clarion\accessory\driver\ObdPatch
  A utility (including source code) for patching the DLL's with a fixed address location.

Drivers shipped as Source

Object based drivers are shipped as source code. This allows new or custom drivers to be created, based on the existing drivers, and also allows for simpler debugging and bug reporting. The code is written in Clarion, and makes use of class syntax. It works with all Clarion programs including ABC, Legacy, NetTalk, Hand-code, and Custom template sets.

Declarations of new Commands, Properties and Errors are in CWDriver.Inc. This provides these new equates to older versions of Clarion, and it's likely they will be included in future versions of Clarion. A related CWDriver.Clw (automatically included by the Inc file) takes care of the new commands.

If you wish to declare a Custom Field Type, then the custom field class implements the iDriverField interface, which is declared in CWDriver.Int. Examples of custom field types exist in DriverFields.Inc and DriverFields.Clw. These files are not compiled into any driver; they are compiled into your program.

StringTheory

StringTheory is a commercial String Manipulation Library, written in Clarion, and provided by CapeSoft. It does not fall under the license for this kit, and is supplied separately.

In order to compile the drivers a StringTheory license is required. At the time of writing the required version was 3.72 or later.

StringTheory is not required to be added to an application to use the new drivers, when using the normal, or new, driver commands and properties.

StringTheory is required to be in the application if the file or view object is accessed directly (as an object.)

If you are compiling an application in LIB mode, and if ANY object based driver is being used, then the APP MUST make use of the CLASTR.LIB as the source of the StringTheory object. This can be set on the StringTheory Global template, on the Multi-DLL tab. If there are multiple apps in the system this only has to be specified on the Root DLL.

Redirection File

The redirection file for your clarion environment should be adjusted to include;

*.inc = %ROOT%\Accessory\Driver\Source;
*.int = %ROOT%\Accessory\Driver\Source;
*.clw = %ROOT%\Accessory\Driver\Source;
*.exp = %ROOT%\Accessory\Driver\Source;
*.dll = %ROOT%\Accessory\Driver;
*.lib = %ROOT%\Accessory\Driver;


By default Clarion will generate LIB files into a subdirectory of the solution. Worse, it generates them into a different subdirectory depending on the build mode. To avoid this (and you really do want to avoid it) make sure the following lines are in your RED file. See ClarionHub for more (and for more suggestions on optimizing the RED file).

[debug]
*.lib = .
*.dll = .

[release]
*.lib = .
*.dll = .


Link: https://clarionhub.com/t/editing-default-redirection-red-file-to-create-lib-with-dll-in-project-folder/1646

Compiling The Drivers

File Drivers are a little different to normal Clarion accessories. With normal code the class can be included in a root DLL, exported from there, and it compiles when your system compiles. Changes in the code are simply applied when compiling. However, because the File Drivers are used by the IDE (in the dictionary for example) they have to exist as binary files (DLL's) in the clarion\bin folder as well. This means they have to be compiled.

These drivers support multiple Clarion versions, from Clarion 8 and up [4]. That includes many separate builds of Clarion, and making a compiled DLL for each one in the driver kit install is not ideal. For this reason, after installing (or updating) your drivers, you will need to compile (or recompile) them.

It gets worse. Because the drivers make use of StringTheory, and because StringTheory is updated fairly regularly, you MAY need to recompile the drivers when StringTheory is updated. If you are building your program in LIB mode, then you MUST recompile the drivers when StringTheory updates.[1]

Fortunately, compiling the drivers is trivial;
  1. Open the IDE
  2. Go to the File Menu, to the Open Project or Application option
  3. Navigate to the \clarion\accessory\driver folder and open DriverKit.Sln [2]
    (Once opened, there will likely be nothing to see in the main part of the IDE window; opening the Solution Pad though shows you the solution that has been opened.)
  4. Go to the Build menu, then the Build Solution option.
This creates two support DLL's (CLASTR.DLL and CLAOBD.DLL) and also the 3 new drivers (CLALIT2.DLL, CLAMEM2.DLL and CLADOS2.DLL) as well as their LIBs, in both DLL mode and LIB mode. The DLL's are copied to the \clarion\bin folder (where they have to be for Driver registration [3]).

Drivers need to be registered after their first compile, but do not need to be re-registered with each subsequent compile.

Note 1: If you are compiling an application in LIB mode, and if ANY object based driver is being used, then the APP MUST make use of the CLASTR.LIB as the source of the StringTheory object. This can be set on the StringTheory Global template, on the Multi-DLL tab. If there are multiple apps in the system this only has to be specified on the Root DLL.

Note 2: If you have purchased commercial drivers based on the kit, then there may be other solutions (SLN files) in this folder as well. After compiling DriverKit.Sln you MUST compile those other solutions as well.

Note 3: The post-command batch files, also located in the \clarion\accessory\driver folder, assume a standard folder layout, and that the DLL files will be present in the current folder after compiling. If you are using a non-standard folder layout then you will need to adjust these BAT files appropriately.

Note 4: Not all features of the new driver are supported in all Clarion versions.

Registering File Drivers

In Clarion, file drivers are registered in the File Driver Registry. Once registered they are available for use in the dictionary. The driver must be a DLL file in order to be registered (in other words, it has to be compiled first).

Drivers need to be registered after their first compile, but do not need to be re-registered with each subsequent compile.

To be registered the driver DLL has to be located in the \clarion\bin folder. It cannot be in \clarion\accessory\bin or elsewhere.

You can register multiple drivers at the same time.

Only one instance of the IDE should be open when registering a new driver. Once the new driver(s) are registered, close the IDE and restart it.

To register a new file driver
  1. Go to the Tools menu, Register File Drivers option. Then ADD.
  2. Select the new file driver DLL and click on Open.
    It's possible to select multiple Driver DLL files at the same time (using Ctrl and Left-Click).

Template

A template file, DriverKit.TPL, is provided as part of the kit. This provides some template support for the new drivers so that they can easily be used in ABC and Clarion (Legacy) programs.

The template is not required for using the drivers, however access to the new Commands, Properties and Equates requires that either the template be included, or the new functionality is exposed manually (see below).

If the template was not registered by the installer then go to the Tools menu, Template Registry option, and Register the new template there. This only has to be done once; the template does not need to be re-registered when it is updated.

Hand Code

Hand-coded projects can also make use of the new drivers, however since the template does not exist, the Include file and Driver must be added manually to the project;

Include('cwdriver.inc'),Once

This will automatically trigger the compiling of CWDRIVER.CLW as part of the project.

Add the drivers to the File Drivers list in the Project.

The following Conditional Compilation Symbols should be declared:

For DLL mode projects:
DRVLM=>0;DRVDM=>1

For LIB mode projects:
DRVLM=>0;DRVDM=>0

Note that even for Lib mode, the DRVLM setting is 0. This is because the drivers themselves are not compiled as part of the project. Drivers are compiled separately, and are included in your project in the usual way (ie in the DRIVERS section of your solution.)

When deploying, if in DLL mode, you will need to deploy the driver DLL(s), as well as CLAOBD.DLL and CLASTR.DLL.

Using New File Drivers

There are multiple levels of "using" the new drivers;
  1. Trivial: Use it just like any other driver. Register it, use it in your dictionary.
  2. Simple: Add the Activate Object Based Drivers Global extension to your application. This activates all the new commands and properties that the new drivers offer. If your program is not an APP, but hand-code, then see the instructions above.
  3. Simple: Make use of other templates which are designed to make use of new driver functionality.
  4. Intermediate: Make use of the new properties and commands to enhance your code with new functionality and better performance.
  5. Advanced: Create a class for a custom field type, and use it with a shipping driver
  6. Advanced: Derive, or write your own driver

Lib Mode

Clarion offers programs the ability to compile in two different ways. DLL mode creates a small EXE which then links to several DLL's (CLARUN.DLL et al) at run time. LIB mode links all[1] the libraries into the main EXE, thus resulting in a single EXE which can run without outside DLL's.

The drivers support both DLL mode and LIB mode, and when compiling the drivers all the necessary LIB and DLL files are created.

Note 1: This is only partly true as many programs load DLL's at run time, not compile time. These run time DLL's are not included in the EXE so most LIB programs require at least some DLL's.

Incompatibilities

In a small number of cases it is desirable to change existing behavior. This section explains these differences, and suggests ways to detect, and mitigate them. An incompatibility is defined as a situation where existing working code no longer has the same result as it did with a traditional driver when it is used correctly.

In some cases behavior changes in the sense that the driver supports more functionality. For example the traditional DOS driver supports files up to 2GB in size, and {prop:FileSize} returns erratic (and incorrect) numbers for files larger than 2GB. By contrast the object based DOS2 driver supports larger files and so returns correct values. These are not considered to be incompatibilities.

Incompatibilities (by design) are as follows;

Views cannot span Multiple Drivers

More accurately, tables included in the VIEW, which do not use the same driver as the primary table, are excluded for purposes of the VIEW's SELECT statement. This includes Memory tables.

When the primary table in a VIEW is a traditional driver, and multiple different drivers are in play, then the view engine moves all the necessary functionality to the client side. This can have a significant impact on performance. To avoid this the object based drivers simply ignore all tables in the view that are not the same driver as the primary table.

When the View Filter Fails

In traditional drivers the filter is first passed to the server. If the server is unable to fulfill the whole filter, then the whole filter (including the "range") is passed back to your program. Your program then applies the filter on the client side to create the result set. If your program is unable to process the filter, then the filter is ignored and ALL the records are returned.

In the current era of security and data leaks this is not desirable. If your filter contains any user, or database values then it is possible for a user to bypass the filter by crafting specific bad values.

Object based drivers reverse this behavior. If the filter is invalid, and it cannot be processed on the Server then NO records are returned in the result set. This approach is safer and prevents data leaks. Incidentally it also makes finding the bug (ie invalid filters) easier to spot since "no data" is easier to spot (and more likely reported) than "too much data".

Related to this, if the driver is a SQL driver, then the filter is NOT passed back to the client for processing; if the filter cannot be processed on the server, no data is returned. Using client-side expressions in prop:Filter (a property designed to be passed to the server) should be treated as a bug, as it impacts performance in non-intuitive and non-obvious ways. Programs should separate server-side filters from client-side filters: server-side filters belong in prop:Filter, client-side filters belong in the NEXT loop.
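As a sketch of that separation (the view, table, field and procedure names here are hypothetical), the server-friendly condition goes into prop:Filter, while the condition only the program can evaluate is applied inside the NEXT loop:

```clarion
OPEN(CustomerView)
  ! Server-side filter: written so it can be translated to a SQL WHERE clause
CustomerView{prop:Filter} = 'Cus:Balance > 0'
SET(CustomerView)
LOOP
  NEXT(CustomerView)
  IF ERRORCODE() THEN BREAK.        ! end of the result set (or an error)
  ! Client-side filter: a test only the program can perform, applied per record
  IF ~IsCustomerWanted() THEN CYCLE.
  ! ... process the record ...
END
CLOSE(CustomerView)
```

IsCustomerWanted is a hypothetical local procedure standing in for any client-only test; the point is that it lives in the loop, not in prop:Filter.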

This affects all the places where views are used including Browses, Reports and Process procedures.

SQL: Implicit DROP on CREATE

For ISAM drivers the CREATE command acts as a way of creating a new file on the disk, overwriting the file if it already exists. To maintain consistency of behavior, the traditional SQL drivers execute a DROP TABLE command as part of the CREATE statement.

This has proven to be extremely undesirable in real life, mostly because the templates generate code which calls CREATE on a File Not Found (2) error. Due to networking or connection issues it was possible to get a false error when opening a SQL table. The template code would then call CREATE, which in turn issued a DROP TABLE, which often succeeded. This has the effect of emptying existing tables, which is undesired behavior in most cases.

The object based drivers do not issue a DROP TABLE command. If the Table exists then the CREATE will fail with an appropriate error.

If you wish to remove a table, and replace it with a new table then call REMOVE before calling CREATE. If the goal is simply to empty the table of existing data then call EMPTY.
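For example (using a hypothetical file label Customers), the explicit sequence replaces the old implicit DROP:

```clarion
  ! To replace the table entirely: REMOVE it first, then CREATE it
CLOSE(Customers)
REMOVE(Customers)        ! explicit drop; a File Not Found (2) error is harmless here
CREATE(Customers)        ! fails with an error if the table still exists

  ! To keep the table but discard its data
OPEN(Customers,12h)      ! 12h = Read/Write Deny All; EMPTY needs exclusive access
EMPTY(Customers)
```

The 12h access mode shown is the traditional requirement for EMPTY; check the individual driver's documentation for any driver-specific behavior.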

Prop:Sql / Prop:SqlRowSet Disabled by Default

The prop:Sql (and the more recent prop:SqlRowSet) commands have been a part of the Clarion language since SQL drivers were available. However these commands are extremely unsafe, and open the door to significant SQL Injection attacks. As more and more Clarion programs are exposed to the web, this becomes a serious problem.

To encourage safer coding, prop:Sql and prop:SqlRowSet are disabled by default in object based drivers. A section on SQL Injection in this document discusses how these statements can be refactored to be safer, and how they can be enabled during the transition. The prop:Sql and prop:SqlRowSet statements should be replaced with the new SQL command.

EOF / BOF

The EOF (End Of File) and BOF (Beginning of File) commands were a part of the original Clarion language, but they were deprecated when Clarion for Windows was released. Their use has been discouraged since the Topspeed driver was released. The official documentation says;

The EOF procedure is not supported by all file drivers, and can be very inefficient even if supported (check the driver documentation). Therefore, for efficiency and guaranteed file system support it is not recommended to use this procedure. Instead, check the ERRORCODE() procedure after each disk read to detect an attempt to read past the end of the file.

As noted above, in a SQL context, use of these commands is very inefficient. For this reason they are not supported at all by object based SQL drivers, and return the error NoDriverSupport (80) if used.

They are supported in the DOS2 driver, and other drivers may choose to implement the command.

A check of the shipping classes and templates, as well as a check of the most popular accessories shows that these commands are not used at all, so it is unlikely you will notice they are gone.
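The recommended pattern reads until NEXT returns an error rather than testing EOF; a minimal sketch (the key and file names are hypothetical) looks like this:

```clarion
SET(Cus:NameKey)               ! start at the beginning of the key sequence
LOOP
  NEXT(Customers)
  IF ERRORCODE() THEN BREAK.   ! typically 33 (Record Not Available) at end of file
  ! ... process the record ...
END
```

This works identically on traditional and object based drivers, ISAM and SQL alike.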

Prop:Disconnect

The prop:Disconnect feature has been removed. The DISCONNECT command offers a more nuanced approach to forced database disconnections. For more information see DISCONNECT.

Behavior Changes

This section serves to highlight general behavior changes which, while unlikely to affect your program, are worth noting;

Using INSTRING in prop:Filter
  Description: The Clarion INSTRING command supports STEP and START parameters. The SQL equivalent sometimes[1] supports START, but does not support STEP. If omitted, the default values for STEP and START are Length(substring) and 1 respectively.
  Traditional behavior: If STEP or START is not 1 then the whole filter is passed to the client for processing.
  Object based behavior: The values for STEP and START are ignored; they are treated as ,1,1. If they are not explicitly set to ,1,1 then BugAlert is called so the code can be tidied up.

SEND(Sql)
  Description: In very early versions of Clarion the SEND command was used to send SQL commands directly to the SQL database. This was deprecated in favor of the File{prop:Sql} syntax.
  Traditional behavior: For historical reasons traditional drivers still support this form of the command, in addition to using SEND to adjust the File Driver String.
  Object based behavior: Support for SEND(Sql) has been removed. SEND remains as a way to set a property inside the driver itself - a run-time way to adjust the File Driver String. Since the File Driver String, and SEND messages, are file driver dependent, you should consult the individual drivers for the commands they support, and the impact those commands have.

Note [1]: MS SQL and Oracle do support START; SQLite, MySQL, PostgreSQL and Firebird do not. The base classes assume the SQL database does not support START. Individual drivers may support it; consult the driver documentation.
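In practice this means INSTRING expressions in prop:Filter should always spell out the step and start parameters as 1,1 (the view and field names below are hypothetical):

```clarion
  ! Portable: explicit ,1,1 behaves the same on traditional and object based drivers
BrowseView{prop:Filter} = 'INSTRING(UPPER(''SMITH''),UPPER(Cus:Name),1,1)'

  ! Avoid: omitted parameters default to Length(substring) and 1, which forces
  ! client-side processing on traditional drivers and triggers BugAlert on
  ! object based drivers
BrowseView{prop:Filter} = 'INSTRING(''SMITH'',Cus:Name)'
```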

Context

All software is written with a specific context in mind. The value you get from that software will largely depend on how well your context matches the context that the software is designed for. Which is not to say the software isn't useful in a different context, or that it won't work, but it perhaps won't run as fast in one context as in another.

The object based SQL drivers are designed to be as flexible as possible, and to do so they try and allow the programmer to explain the context in more detail, which in turn allows the driver to better adapt to that context. Failing additional cues it defaults to behavior that is consistent with the traditional drivers.

For example, a default context is that the database will be accessed by multiple processes on multiple machines. This is a common context, and the most flexible. However, your SQLite database (for example) may only be used by processes on a single machine. If that is the case then the /WAL driver switch can improve performance by several orders of magnitude.

Context can be applied at the design stage; for example a table that is written to a lot would be better served with fewer keys, whereas a table that is mostly read, or used for browses and reports, would benefit from having more keys. Data which is limited to a single database can be faster using auto-incremented values. Whereas data which is spread out over multiple databases (for example, installed on multiple customer premises, or copied to phones for offline use, or used with WEB API services) would be better served using UUIDs.

Some contexts can be set at run time. For example, reports are read-only, and typically read the entire result set, so benefit from read-only access and large result sets. Conversely browses typically don't need the whole result set, so using smaller pages is more efficient. The Performance section in this document discusses properties that can be set to improve the context values in the driver.

A number of properties in the class allow more and more work to be done on the server, and less work to be necessary at the client. In general this is a good idea because it reduces network traffic (which is both slow, and expensive) and makes fewer round-trips to the server, which in turn improves performance. In some situations though networking bandwidth may be plentiful, CPU time on the client may be cheap, but the server is the primary resource constraint. In this context it can make sense to reduce, or limit, the amount the server has to do, moving some of that work onto the client.

While the above are just a few examples, it becomes obvious that all advice, all techniques, and all code should be evaluated within your own context: within the boundaries of your own system, minimize the use of expensive, or slow, resources, and maximize the use of resources where they are plentiful.

SQL Background

Files were part of the original Clarion product, when it was created on DOS. The name of the data structure, FILE, literally referred to the file on the disk. The data structure in the program was hence called a FILE and it has remained so ever since. In today's terminology it is more accurate to refer to it as a table, however the underlying Clarion structure is still called a FILE.

The original file driver specification was designed to deal with ISAM or Flat-File disk files. There was no concept of a result set such as we have with SQL databases. There were also no JOINs, and code that involved multiple tables ultimately just had multiple lines of developer (or template) code that "walked" through the FILEs. This approach, while very efficient for ISAM files, is very inefficient for SQL.

Part of the goal of this new code is to upgrade the efficiency of working with FILEs and SQL - especially in hand-code, while maintaining backward compatibility.

VIEWs were added to the language (in Clarion for Windows 1.5), to deal with JOINs, filters, PROJECTs, and so on. VIEWs were designed to be (mostly) SQL friendly and have become even more SQL friendly over the years. FILEs however have lagged behind, which is a problem because a lot of existing hand-code makes use of FILEs not VIEWs, and programmers are reluctant to make sweeping changes to existing code. Most embed code still uses FILE structures, and as a result is much slower (and consumes more server resources) than it needs to be. For this reason object based drivers expose a lot more functionality at the FILE level which traditional drivers only supply at the VIEW level.

FILEs support prop:Sql, which is something of a sledgehammer approach to using efficient SQL with FILE code. Ideally there should be something less brutal than prop:Sql, which at the same time expands the SQL horizons of the FILE commands.

Object based drivers add more SQL friendliness to FILEs (although in some cases converting to a VIEW is still preferable.) If you have a typical SET / NEXT loop then be sure to read the section on SET / NEXT loops.
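For reference, a typical SET / NEXT loop of the kind referred to here looks like this (table, key and field names are illustrative):

```clarion
! Walk the Customers table in Cus:NameKey order, from the start.
Clear(Cus:Record)
Set(Cus:NameKey)
Loop
  Next(Customers)
  If ErrorCode() Then Break.
  ! ... process the current Cus:Record here ...
End
```

The properties discussed in the SET / NEXT section let the driver execute a loop like this far more efficiently against a SQL back end.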

SQL Cursors

In traditional SQL drivers, cursors are typically used to create a result set on the server side, and then to iterate through that result set, or update records in the result set. This makes looping through records and updating retrieved records simple. Unfortunately this convenience comes at a price. Cursors can be "expensive" on the Server side - especially if the result set is large.

The primary benefit of a cursor is that it provides a "snapshot" of the data at a specific moment. For example, if you are drawing a report while records are being added to, and removed from, the table, a cursor still delivers a consistent "snapshot" of the data, whereas iterating through the table (using multiple distinct SELECT statements, known as Paging) may result in an inconsistent data set. This can be a problem in situations where things need to "balance" (for example, if you were printing an entire general ledger, then records being added and deleted during the report would be a problem.) A single SELECT statement would solve this problem, but that implies storing the entire result set client-side, which would take some time to do, and would potentially require more RAM than is available.

One problem with cursors is that the cost of creating them is often unnecessary. In many cases in Clarion programs a cursor is created with a large amount of data, even though most of that data will not ever be used by the client. For example if a table contains a million rows, and you open a Browse procedure in your program, a cursor with a million rows is created on the server. If the user is there to check the top few entries, or to insert a record, then all the resources used to create the cursor on the server are wasted.

Equally the cursor may be long-lived on the server, again consuming server resources. If a Browse is opened, the cursor (with the whole result set) is created. But the user may remain on that browse for a substantial period of time, which in turn means the server resources are consumed for all that time (until the browse window is closed.)

An alternative to cursors is paging. In this approach a page of data at a time is fetched from the server. Each page is a distinct SELECT statement. The result of each SELECT is immediately moved to the client (your program) and then processed there. While the page size can be set, it is typically "not huge" and so the resources consumed on the server, and the memory space required on the client are kept low.

The primary benefit of this approach is that the impact on the server is kept to a minimum, and the primary cost on the client (the additional memory required to store the result set) is mitigated by adjusting the page size.

It's worth noting that while page-based approaches require the use of a primary key, your SQL tables should always have a primary key anyway, so this is not a major problem.
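As background, each page can be fetched with a keyset-style SELECT, resuming from the primary key value of the last row of the previous page. The SQL below is a sketch (the table and column names are hypothetical, and the exact statement the driver generates may differ from database to database):

```sql
-- First page, in primary key order
SELECT Id, Name, Balance FROM Customers ORDER BY Id LIMIT 100;

-- Subsequent pages: resume after the last Id already fetched
SELECT Id, Name, Balance FROM Customers WHERE Id > ? ORDER BY Id LIMIT 100;
```

Because each statement completes immediately, no server-side state is held between pages.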

It's also worth pointing out that for (almost all) existing ABC and Legacy programs this change has no impact on the running of the program (other than server-side resource and performance improvements). The only difference would occur if you are setting the ISOLATION LEVEL for the driver to Repeatable Read or Serializable.

In summary, object based drivers do not routinely make use of Cursors, however this difference does not affect how your program is written, or how it behaves.

SQL Injection

When programs interact with SQL databases care must be taken to prevent unwanted, and unexpected commands from being executed. Techniques for executing commands in the database, not intended by the programmer, are known as SQL Injection attacks.

All Clarion drivers, both traditional and object based drivers, make use of a technique known as Parameterization. This means that the SQL statement is split into two parts, the command part and the data part. The database engine will not execute commands that are included inside data because it treats that part purely as data. This covers all the commands provided by the drivers. Almost...

Exceptions to this rule are the {prop:Sql} and prop:SqlRowSet statements. These are passed to the database as-is, and thus the programmer is responsible for sanitizing data used in such statements. For this reason the use of these statements should be discouraged; use them only when absolutely necessary.

Prop:SQL / Prop:SqlRowSet

Commands passed through the Prop:Sql statement are not parameterized, and hence are vulnerable to SQL Injection attacks. Despite this risk, prop:Sql has become popular among Clarion programmers as a way of bypassing the limitations (real or perceived) of the file driver layer. This has resulted in a lot of code that is vulnerable to these attacks. In the current era though, especially as programs are exposed to the web, a greater focus on data security needs to exist.

Not surprisingly, most programmers are not security experts. Ideally the driver should prevent the programmer from doing unsafe things by default, while allowing unsafe behavior in specific places where they are aware of the risks and have taken steps to mitigate the vulnerability.

To this end, object based drivers default to not processing prop:Sql or prop:SqlRowSet statements.

The goal is to reduce the use of prop:Sql in the program as much as possible, replacing it with safer calls while still allowing the same level of functionality. Two strategies are employed to make the code safer;

Strategy Goal
Enhanced property support in SET Reduces the need to use prop:Sql for performance reasons.
New command SQL Allows for the equivalent of prop:sql and prop:SqlRowSet but with parameterized values

Obviously compatibility with existing programs, and with traditional drivers, is desirable, especially when transitioning from a traditional driver to an object based driver. During the transition the use of the driver string /UNSAFE=true (typically done in the dictionary, but it can be applied globally) allows prop:Sql to work in the program as it does with traditional drivers. This bypasses all the new safety features and should be considered a very short-term measure.

Individual calls to prop:Sql should be inspected and the functionality replaced using either the additional features in SET or by using the SQL command in place of prop:Sql.
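As a sketch of what such a replacement looks like (the exact parameter syntax of the SQL command should be confirmed against the SQL command documentation; the table and variable names here are hypothetical):

```clarion
! Unsafe: the user-supplied value is concatenated into the statement,
! so a crafted loc:name can inject extra SQL.
Customers{prop:Sql} = 'SELECT * FROM Customers WHERE Name = <39>' & loc:name & '<39>'

! Safer sketch: the value travels separately as a parameter,
! so the database treats it purely as data.
SQL(Customers, 'SELECT * FROM Customers WHERE Name = ?', loc:name)
```

The parameterized form also lets the server reuse the statement plan, which is a small performance bonus on top of the safety gain.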

Once all code has been inspected and replaced, the /UNSAFE setting in the dictionary can be removed, or set to /UNSAFE=stop.

This process can be done on a file-by-file basis, gradually removing the unsafe code, and the /UNSAFE setting, as you go.

To assist testers in locating unsafe usage of prop:Sql, the driver string can be set to /UNSAFE=stop. When set like this, a call to prop:Sql will result in a STOP appearing (before the call is processed.) This makes it easier to identify where this unsafe code exists in the program.

Note: While not common, prop:Sql can also be used to return the most recent SQL statement from the driver. This feature remains. There is no restriction on using it this way, and the UNSAFE setting does not affect this use of prop:Sql. Inspecting prop:Sql after a call can help you understand and improve the SQL that the driver is generating to the database. See also prop:Explain.

Equally true however is that the SQL generated by the driver, and hence returned by prop:Sql can change with new versions of the driver. So this property should be used in a "short term" way, understanding that things may change with a driver update.

Check List

This section acts as a quick index to new, and updated, commands. If you are familiar with file drivers, and how they work, this provides a quick look at what is new and improved.

New Commands

Command Context Description
ADDRELATION File Declare a relationship between two tables.
BINDFIELD File and View Bind local variables to be used in result sets
BYTES64, POINTER64, RECORDS64 DOS2 Driver Support for large files in DOS2 driver
CLEARPROPS File and View Clear all properties relating to SET etc.
CONNECT / DISCONNECT Database Connection to a SQL database.
EXISTS File Check if a table exists in the database
EXPORT File Export the contents of a table to a text file.
FROMQUEUE / TOQUEUE File Move contents from a table into, and out of, a queue.
GETRANDOM File Read a random record out of the table
IMPORT , EXPORT File Import and export tables to and from CSV files.
JOINTABLE View Add a table to an existing VIEW structure
LOGGINGON, LOGGINGOFF, LOG, LOGBUFFER File and View Log commands and information flowing through the driver.
LOGOUTCONNECTION File A version of Logout that allows multiple transactions to be active at the same time
MERGE File Merge the data from one table with another table
PROJECTFIELD View Add a field to an existing VIEW structure
REOPEN File Change the access mode of the OPEN, without needing to do a CLOSE
SAVEPOINT File Create a save point in a transaction, which allows for partial rollbacks
SEARCH File and View Full Text Search across multiple columns in a table
SUBSET, EXCEPT, INTERSECT, UNION, UNIONALL File and View Subset filtering
SQL File and View Replacement for the prop:sql command
UPDATE File Update one or more records in the table, without the need to read the records first.
UPSERT File Insert if the record does not exist, update it if it does already exist.

Old Commands, New Features

Command File Or View Description
ADD / APPEND File Partial record writes to the database
COPY File Copy a table to another table
CREATE File Allows the program complete control over the SQL CREATE
DELETE File Pass a filter to the DELETE command
DUPLICATE File Supports non-unique keys
EMPTY File Empty a table in the database (exclusive access not required)
FLUSH File End a high-performance SQL section
Release any server, or client resources being held from a previous call to SET / NEXT.
PUT File Partial record writes to the database
RECORDS File and View Pass a filter to the RECORDS command.
Prime fields, and count instances of those field values.
RENAME File Rename a table in the database
Set / Next File Lots of new properties and control
STREAM File Start a high-performance SQL section

Files

Database Design

Many dictionaries have a long and storied history, and began their days with ISAM files. As object based drivers are primarily SQL based it's worth stating some design tips which should be applied to existing dictionaries for best functionality and performance;

Tip Context
All Tables need a Primary Key Everywhere. Modern systems require primary keys, and the object based drivers assume you have one.
Ideally (but not required), the Primary key should have one component. Where possible the primary key should have a single component. This is not a requirement, but will improve performance.

Dictionaries coming from ISAM roots also tend to make use of features which are common in ISAM files, but generally have been frowned upon, or have not been available in the SQL context. Some of these limitations have now been removed.

Tip Context
Arrays Object based drivers support arrays, storing the data either in the native array format (if the database supports it) or in JSON if not.
Exclude Null Keys Previously not supported in SQL databases, now generally supported.
Giving keys unique external names Used to be a requirement, but is no longer necessary (and possibly harmful) in object based drivers
STRING versus CSTRING There was a perception that STRINGs in the database needed to be converted to CSTRING. This is not true for object based drivers.
MEMOs and BLOBs Memos are supported by object based drivers. BLOBs are supported by all SQL drivers.
Record Length In Topspeed the maximum length of a single record is 15K, not including MEMO and BLOB fields.

In MsSql the maximum length is 8K, however VARCHAR and TEXT fields can be excluded from this calculation.

In PostgreSQL the maximum length is 8K[1], however VARCHAR, BYTEA and TEXT fields can be excluded from this calculation.

In SQLite (and therefore also in InMemory) the default size is 4k, but is configurable up to 64K. BLOB and TEXT are not included in the calculation.

Firebird defaults to 8K, but can be extended to 16K at the time of the database creation. As with the others, BLOB and TEXT fields are not included in this limit.

In summary, keeping the record under 8K in size is generally a good idea. However it's not a hard limit as large strings (VARCHAR fields) are usually not included in this calculation. MEMO and BLOB fields are also not included.

[1] Technically the PostgreSQL value can be changed, but only by recompiling the actual server.

Arrays

None of the traditional SQL drivers support arrays. Object based drivers however do support arrays.

There are some limitations;
  1. Arrays cannot be used in Keys, and (generally speaking) are not easy to use in FILTER or ORDER attributes.
  2. Arrays of the GROUP, MEMO and BLOB data types are not supported.
  3. Changing the size of an array in the dictionary may, or may not, trigger a table update (ie Error 47). Some databases set arrays to specific sizes, others do not.

Storage of array data in the actual database can vary from one database to another. How the data is stored in the database, however, is of little concern to the program, and won't matter when writing Clarion code. The rest of this section discusses different possible storage approaches, but this is for background information only and is not important when using FILE or VIEW structures.

Some databases (like PostgreSQL and Oracle) have direct support for arrays, some (Firebird) have more limited support, and some (SQLite, MySQL, MariaDb, MS SQL Server) make use of JSON functionality, storing the array as a JSON text string, and using JSON commands to interact with specific array values.
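In the FILE structure itself an array is simply a field with a DIM attribute; the database-side representation is the driver's concern. A sketch (the driver name, table and fields are illustrative):

```clarion
! Monthly is an array of 12 REALs. The driver stores it natively
! where the database supports arrays, or as a JSON text string where not.
Readings             FILE,DRIVER('SQLite2'),PRE(Rdg),CREATE
Record                 RECORD
Id                       LONG
Monthly                  REAL,DIM(12)
                       END
                     END
```

Clarion code reads and writes Rdg:Monthly[n] as usual, regardless of how the database stores the values.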

Auto Numbering

One of the joys of working with Clarion is that creating a Primary Key Identifier for each row in the table has always been trivial. By creating a field (a number, usually a LONG) in the table, then creating a key on this field, and setting the key to auto-number, the underlying code would be generated to create unique values for this field. This approach, where the program creates the value, is known as Client Side Auto Numbering.

There are quite a few structural problems associated with this approach, and these became evident as time passed and bigger, and more complicated, systems were designed. One of these problems is the case where multiple programs are created (some of which may not be in the Clarion language) which are adding data. Relying on each of them to number the field correctly is not ideal.

Most SQL databases support the idea of an IDENTITY[1] column - that is, one which is allocated by the server when a record is added. Using that feature, the task of doing auto-increments is moved onto the server, and the client need not worry about the field at all. Server Side Auto Numbering is born. Since this option exists it is the work of but a moment to identify the field, and let the server increment the number for you.

Unfortunately it's not always quite that simple. Auto Numbering has other issues and while this solves one problem, it opens the door to others. Thus configuring your dictionary to use Server Side Auto Numbering can be tedious.

The primary problem is that the program may need to know what the auto-incremented value of the record just added is. In many cases the ADD, of a parent record (for example Invoice) generates a number which will immediately be used when adding child records (for example LineItems). (Happily the driver can sort this out for you.)

Secondly though, the templates still need to know that auto-numbering exists because when forms are encountered the parent record is added when the form opens. This then gets that unique ID value, so that child records can be added while the form is open (again, think of the Invoice / LineItems situation.) As long as rows are being added this works well, but it gets messy when an ADD is cancelled.

Thirdly, your SQL server may recycle numbers which have been used (but discarded), or it may not. Consult your SQL database documentation to determine what yours does. Hopefully you are happy with the default approach. Some object based drivers may offer a way to explicitly reuse or not reuse numbers.

Fourthly, a LONG field type can hold a maximum value of just over 2.1 billion. So you may encounter problems with tables that accumulate data quickly, and may accumulate more than 2.1 billion rows over their lifetime (assuming row numbers are not reused.) Ideally you would just use an INT64 integer instead, but we don't (yet) have that as a natural data type in Clarion. (A REAL would be a good alternative in this case though.)

Fifthly, some databases allow you to override the Server-side generation (by passing in a specific value); others do not. If a field should not be set by the program you can add the | READONLY | attribute to the External Name. When this switch exists, the field will not be included in ADD statements. It will automatically be set though when reading the table, and also after the ADD command is called.

All of the above serves as a warning that Server Side Auto Incrementing should be used deliberately, and with care. It works fine, but the boundaries of the usage need to be understood.

Note 1: Different database engines have different terminology for an auto numbered field. MS SQL Server calls it an Identity. PostgreSQL refers to it as a Serial, SQLite uses AutoIncrement, MySQL and MariaDB call it Auto_Increment, Firebird calls it a Sequence and so on. Regardless of the database terminology the setting(s) in the dictionary refer to it as ISIDENTITY.

Implementing Client Side Auto Numbering

  1. Create a KEY on the field, and set Auto-Numbering on the key to ON. Typically this is a single component key, and is the primary key for the table, but there could be more components in the KEY (as long as this field is the last one.)
  2. For SQL drivers you can make use of the /AutoInc and /PreAutoInc Driver Strings to create a SELECT statement which will most optimally prime the auto incrementing value for you. If these are not used then generic code will be generated by the templates, which may not be optimal for your SQL database. Unless you are adding lots of records in a short space of time, though, this performance factor will not be important.

Implementing Server Side Auto Numbering

  1. Make sure Auto numbering is OFF for all keys containing this field. For server-side auto-numbering the key MUST NOT have the auto-numbering option turned on.
  2. Instead, identify the field in question by setting the Field User Option
    ISIDENTITY=1
  3. In code, sometime before the record is added, tell the driver that the field exists. (If you are using an APP this is done for you by the template.) The code consists of two lines (documented here for hand-coded programs);
    SomeFile{PROP:ServerAutoInc}
    SomeFile{PROP:ServerAutoInc,n} = m
    Where n is the field number and m is the column number of the returning SQL code.

    OR
  4. (Use this step in place of step #3.) Object based drivers can automatically set this property if the | ISIDENTITY | attribute is added to the field's external name. This is somewhat less work than step #3, and also doesn't require adding code to the program. Thus it works equally well in App-generated or hand-coded programs. This step is in addition to, not a replacement of, step #2 above.
  5. If you want the templates to treat the file as server-side-auto-numbered, specifically if you want forms to ADD the record on form open, then you need to add a File User Option to the table in the dictionary;
    EmulateAutoNumKey=1
The mechanism for populating the assigned value, and retrieving it by the program, has been different for each database engine over the years. However recently the RETURNING (or OUTPUT) SQL clause has been added to most common databases and so this is the suggested and preferred way for object based drivers to get the value (and the current drivers use these features.) This does limit access to earlier versions of the database engine though.

Database Version Year
MS SQL Server Sql Server 2005 2005
PostgreSQL 8.2 2006
SQLite 3.35.0 2021
Firebird 2.0 2006
MariaDB 10.5 2020
MySQL 8.0.27 2021
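With these versions, the INSERT and the retrieval of the assigned value happen in a single round trip, along these lines (the table and columns are hypothetical; the driver builds the actual statement for you):

```sql
-- PostgreSQL / SQLite / Firebird / MariaDB style
INSERT INTO Invoices (CustomerId, Total) VALUES (?, ?) RETURNING InvoiceId;

-- MS SQL Server style
INSERT INTO Invoices (CustomerId, Total) OUTPUT INSERTED.InvoiceId VALUES (?, ?);
```

Either way the assigned value comes back with the INSERT itself, with no second SELECT required.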

Using the RETURNING clause incurs a slight (and the emphasis is on slight) performance penalty when doing an ADD. For this reason the APPEND command does not include the RETURNING clause and thus maintains its status as A Faster ADD. This makes APPEND suitable for things like bulk adds where the identity value is not needed in the procedure. Tables that do not have an auto-numbered field do not incur this performance penalty on ADD.

After a call to ADD or PUT, the identity field in the table record is updated to match the value in the database. (In other words, after the write, the value in the file record matches the database, regardless of its value before the write.)

You cannot change the value of an IDENTITY field by using PUT. Values in any IDENTITY (or READONLY) fields are not sent to the server, but are rather updated from the server value after the read.
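Putting it together, a hand-coded parent and child add looks like this (Invoices / LineItems and their fields are illustrative; Inv:InvoiceId is assumed to be marked as ISIDENTITY in the dictionary):

```clarion
Clear(Inv:Record)
Inv:CustomerId = 42
Add(Invoices)                    ! the driver retrieves the assigned identity
! Inv:InvoiceId now matches the value in the database
Clear(Lin:Record)
Lin:InvoiceId  = Inv:InvoiceId   ! child rows can use the value immediately
Lin:Quantity   = 3
Add(LineItems)
```

No extra code is needed to fetch the identity; the driver populates Inv:InvoiceId as part of the ADD.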

UUIDs

Auto Numbering (server or client side) presents a number of other issues as well. This is not the place to discuss them in depth. Switching from an Auto Numbered field to a Client-Side UUID field overcomes these problems.

Aside: You may have seen GUID, or UUID, values represented like this; 75163EA6-76C8-473D-A619-E9F30B489912. While this is a common way of displaying a UUID, it's not actually stored like this. The stored value is a binary string; the display above is that binary value, hex-encoded for viewing.

Client Side Generated UUID values

While there are variations, you can consider a Universally Unique Identifier (UUID) field to be a random string field, which is 16 characters long. You add it to your record structure as a String(16). Because the value in it is random, or at least a significant part of it is random, data can be easily added by multiple different systems at the same time, and all the issues described above simply go away.

Object based drivers support setting a field in the dictionary to be a UUID. The driver will automatically populate the field (if it is blank) or use the value in the field (if one is pre-set). In other words, other than the file declaration, no program code is needed to support UUIDs.

There are four primary arguments not to use UUIDs, but to use Auto Numbering instead;
  1. Since the UUID consumes 16 bytes[1] instead of 8 bytes (for a BigInt) or 4 bytes (for a Long), and because primary keys propagate into child tables and into many keys, a database based on UUIDs will consume more disk space than one based on numbers. While this is a valid point, your context will matter. For example wide tables (tables with many columns) will likely take up space anyway, and an extra few bytes per row will not make a significant difference. Also, disk space is cheap, so how much this size difference matters overall to a system is doubtful. In most cases the percentage increase in size is likely to be minimal.
  2. Since UUIDs are not numeric they require more CPU time to process (comparing, sorting etc). True, but again, whether it is significant or not depends on your context. Most CPUs are good at dealing with (short) strings, and this is seldom the bottleneck in database operations. Using UUIDs is a common practice, so databases are optimizing for it, which in turn reduces this performance penalty.
  3. Most databases use (by default) the Primary Key as the Clustered Key, meaning that records are physically laid out on disk in this order. This can be more efficient when processing records in that order. Since UUIDs are random, they tend to be added out-of-order, which can make ADD slower. This can be mitigated by choosing another key to be the clustered key, which may be advantageous if supported by your database - not all of them support this. If changing the clustered key is not an option then a specific form of UUID (version 7 - see below) deals with this by pseudo-ordering IDs. So this is a non-issue when using that form of the UUID.
  4. Collisions: Since randomness is involved, and multiple databases could be adding data, it's possible to get two records added with the same UUID, which in turn could be a problem. However while this is possible, it's not probable. Regardless of the UUID version chosen the random numbers generated are huge. (Think of it as rolling 32 dice, and having them all come up 6 at the same time.) Probability is a complex subject, and humans are not good at comparing probabilities, but it's fair to say you have more risk of being hit by a small rock falling from the sky (ie a meteorite) than of encountering a collision in UUID space.
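To put a rough number on that probability: by the birthday bound, after generating n version 4 UUIDs (122 random bits each) the chance of any collision, p, is approximately

```latex
p \approx \frac{n^{2}}{2 \cdot 2^{122}} = \frac{n^{2}}{2^{123}}
```

so even after a trillion rows (n = 10^12), p is roughly 10^24 / 10^37, or about 10^-13 - vanishingly small.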
There are three different kinds of UUID supported by the driver;

Attribute Behavior
UUID4 The UUID conforms to the standard UUID format, version 4. This contains the most randomness (122 bits worth.) The value in the string is a binary value, so cannot be displayed on the screen. This is fine for most systems, but if you are exporting the data to a text output you will need to encode the value. [2]
UUID7 The UUID conforms to the standard UUID format, version 7. This contains significant randomness (74 bits worth per millisecond.) The value in the string is a binary value, so cannot be displayed on the screen. The primary advantage of this format is that the high bytes are an increasing value, based on the time. So data will be added to the end of the table, thus avoiding the clustering problem.
UUID8, UUID Version 8 of the UUID standard allows programs to create the UUID in any form they like, mandating only that two nibbles in the UUID are set to specific values. The version of UUID supported by the classes does not even do that. The UUID8 implemented in these classes simply populates the string with an alphanumeric random text value. This results in around 80 bits of randomness. Since it is alphanumeric it can be exported and imported (to JSON, Excel etc) without the need for additional encoding or decoding. In addition, since it is not binary, it can be stored in a simple Char or VarChar field in the database, which makes sorting and searching easier for the database engine.

To set a field in the dictionary;
  1. Create a String(16) (or String(36)), typically (but not necessarily) called GUID. (Do not use a CSTRING field type here.)
  2. Set the External Name of the field, adding one of the following attributes; | UUID4 | , | UUID7 | , | UUID8 | .
    Setting the attribute to UUID4 or UUID7 implicitly makes the field | BINARY | (you do not need to add the | BINARY | attribute yourself.)
  3. IMPORTANT If you are using UUID4 or UUID7 then any related field, in another table, needs to have the attribute | BINARY |. That field won't have the UUID attribute (because it is populated by the parent field and so should not be generated automatically here) but since the string can contain binary characters it should be labelled as | BINARY |. This is not necessary for UUID8 because UUID8 values are text, not binary.
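In a hand-coded program the equivalent of these dictionary settings is the external NAME attribute on the field (a sketch; the driver, tables and field names are illustrative):

```clarion
! Parent: Guid is generated by the driver; BINARY is implied by UUID7.
Customers            FILE,DRIVER('SQLite2'),PRE(Cus),CREATE
Record                 RECORD
Guid                     STRING(16),NAME('Guid | UUID7 |')
Name                     STRING(50)
                       END
                     END

! Child: the related field is BINARY, but has no UUID attribute
! because its value comes from the parent, not the generator.
Orders               FILE,DRIVER('SQLite2'),PRE(Ord),CREATE
Record                 RECORD
CustomerGuid             STRING(16),NAME('CustomerGuid | BINARY |')
                       END
                     END
```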
When adding new records simply clear the GUID field. The driver will populate a value in the field for you when ADD or APPEND is called. If you wish you can prepopulate the field with a value, and that value will then be used during the ADD or APPEND.

The following code generates a UUID version 7, and saves it in the Cus:Guid field.

Cus:Guid = Customers{prop:uuid7}

Note 1: Guids are often displayed as a 36 character Hex-Encoded string. However this is just a display convention - the actual data is stored in 16 bytes.

Note 2: StringTheory has several methods for encoding Binary values as text. Including ToHex, Base64Encode and UUEncode.

Windows GUIDs

As mentioned above, a UUID value is a 16 character binary string. This format is compact and efficient for databases, but less useful when displayed on the screen, since the value is binary, and binary strings do not usually display well. For this reason, on the Windows platform, UUIDs are usually displayed as a 36 character hex-encoded string, with four hyphen separators.

The object based drivers are aware of this, and contain a built-in transformation - allowing the string declared in the dictionary to be a string(36) and the value in the database to remain as a 16 byte binary value.

This behavior is automatic IF the field declared in the dictionary (and marked with one of the UUID attributes) is a String(36). If you don't want, or need, this functionality then declare it as a String(16) instead.

ADD and APPEND

Using traditional drivers, the whole record is inserted when an ADD or APPEND is done. The object based SQL drivers support prop:Project. This means that only the fields listed in prop:Project will be written to the database when an ADD or APPEND is called. This can be faster if only a few fields in the row are being set during the write.

In the context of an ADD and APPEND the Projected fields are the fields being projected into the server from your program.

If supported by the driver, when an ADD is done, all server-side fields (identity and read-only fields, as well as any not-projected fields which have default values) in the record are reset to their current server-side value. APPEND does not read the server-side values, so is slightly faster if the server-side values are not needed.

See also PUT.

ADDRELATION

AddRelation(File pChildFile, File pParentFile, Long pOnUpdate, Long pOnDelete, *? pChildField1, *? pParentField1, <*? pChildField2>, <*? pParentField2>,...)

The ADDRELATION command informs the FILE of a relationship to a parent file. Relationships are always defined in the child file, not in the parent file. In other words all relationships are defined as Many-To-One (Many child rows to one parent row.)

Parameter Description
pChildFile The relationship is always defined from the child table's point of view. The child requires that the parent row exists. Many child rows can point to the same parent row. (In some cases only one, or indeed no, child rows point to a specific parent row.)
pParentFile The parent table in the relationship.
pOnUpdate The action to take if the parent field value changes. In other words if the parent field (say CustomerId) changes, then all the CustomerId fields in the children can have an action. This action is one of driver:cascade, driver:clear, driver:restrict, driver:NoAction.
pOnDelete The action to take if the parent row is deleted. In other words if the parent row is deleted then all the children rows can have an action. This action is one of driver:cascade, driver:clear, driver:restrict, driver:NoAction.
pChildField1 The field in the child, which matches the field in the parent (defined in the next parameter). These fields must be of the same type.
pParentField1 The field in the parent which matches the field in the child (defined in the previous parameter.)
... Up to 10 pairs of fields are supported.

Some databases (specifically SQLite) require that for best performance the call to ADDRELATION is done before the call to CREATE. In all cases it is ideal to add relationships early in the design life cycle, before the tables have been created, or data has been added. Adding relationships to existing databases can fail if the data in the tables contains values that would violate the relationships.

If relationships have been defined in this way, and CREATE is called on the child table, then the existence of the parent table will be checked, and it will also (if necessary) be created. Databases require that parent tables are created (or exist) before child tables can be created.
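A sketch of that sequence, using hypothetical Customers and Orders tables where Ord:CustomerId references Cus:Id:

```clarion
! Define the relationship from the child's (Orders) point of view, before CREATE
AddRelation(Orders, Customers, driver:cascade, driver:restrict, Ord:CustomerId, Cus:Id)
Create(Orders)    ! Customers is created first if necessary, then Orders with its foreign key
```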

When the Update or Delete action is set to driver:Clear, this does not mean that strings are set blank, or that numbers go to zero. It means the value of the field will be set to NULL. If the field itself is set as NOT NULL then an error might be generated when the table is created, or (more likely) you'll get an error when deleting the parent record. Deleting the parent record would result in NULL being assigned to the field, but since the field is defined as NOT NULL, this fails, which in turn causes the DELETE to fail.

For more on relationships in general see Relationships.

BINDFIELD

BindField(FILE pFile, String pName, *Long pField)
BindField(FILE pFile, String pName, *Real pField)
BindField(FILE pFile, String pName, *String pField)
BindField(FILE pFile, String pName, *Cstring pField)

BindField(VIEW pView, String pName, *Long pField)
BindField(VIEW pView, String pName, *Real pField)
BindField(VIEW pView, String pName, *String pField)
BindField(VIEW pView, String pName, *Cstring pField)

Parameter Type Description
pFile / pView File / View The label of the FILE or VIEW to which the field is being bound.
pName String The name of the field being bound. This should be the name as it will appear in the SQL result set.
pField Long, Real, String, CString A local variable, one of a Long, Real, String or Cstring, to hold the result. When the record buffer is updated (on a call to NEXT or PREVIOUS) the local variable will also be updated.

Using traditional drivers, when looping through a FILE or VIEW, fields being returned are placed into the file record buffer.
However when accessing SQL databases it's possible to SELECT values which are not in the file record. These might be calculated values, or totals or whatever.

For example, using prop:Project a specific set of fields can be set to be retrieved by a SET / NEXT loop;

Customers{prop:Project} = 'Cus:CustomerCode,COUNT(*) AS CustomerRank'

In this case two fields are being projected, the normal field Cus:CustomerCode and also a calculated value which is placed into a field called CustomerRank.

If the table contains a field (in this example Cus:CustomerRank) which can hold the calculated value then that works well. However the table may not contain such a (spare) field. In this situation using BindField to add an external value to hold the result is necessary.

For example

CustomerRank   Long
  Code
  BindField(Customers,'CustomerRank',CustomerRank) 


Then that field can be used to hold the return value in the result set. In this case, when a record is read (by the NEXT or PREVIOUS command) the local variable CustomerRank will be populated with the value in the result set. In effect the local variable has been "bound" into the record for the duration of the loop.
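Putting the pieces together, a sketch of a full loop (Cus:CodeKey is an assumed key name):

```clarion
CustomerRank   Long
  Code
  BindField(Customers,'CustomerRank',CustomerRank)
  Customers{prop:Project} = 'Cus:CustomerCode,COUNT(*) AS CustomerRank'
  Set(Cus:CodeKey)
  Loop
    Next(Customers)
    If ErrorCode() Then Break.
    ! CustomerRank now holds the calculated value for this row
  End
```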

Only fields of type Long, Real, String or Cstring can be bound in this way.

Fields bound in this way can be used in the prop:Order property. They are not applicable to the prop:Filter property.

Local variables used in prop:Filter should be bound, as normal, using the Clarion BIND command. Conversely local variables bound using BIND have no meaning in a prop:Project or prop:Order property. In other words BIND and BINDFIELD are different, and should not be conflated.

The binding is cleared when the FILE or VIEW is closed, or when ClearProps is called.

BINDFIELD is supported in the commands NEXT, PREVIOUS, GET, GETRANDOM and REGET.

Blobs

Blobs are a source of common problems in Clarion, because they don't behave like regular fields and variables. The goal of this section in the document is to clarify behavior, and to highlight places where the behavior may vary from the behavior of traditional file drivers.

In the table below aBlob is a blob reference, assigned as
aBlob &= table{prop:blob,-n}

fil:blob is the blob as declared in the dictionary


Code, and the behavior in the Traditional and Object Based drivers:

s=aBlob{prop:Size}
s=fil:Blob{prop:size}
Both drivers: returns the current size of the blob.

s=table{prop:size,-n}
Traditional driver: returns blank. Object based driver: returns the current size of the blob.

{prop:Size} = x
Both drivers: Prop:Size does not need to be called before setting the blob contents. This behavior is not changing, however it deviates from the blob documentation.

{prop:Size} = x
Traditional driver: when setting the blob size, new space is not cleared. Object based driver: new space is cleared to nulls.

s=aBlob{prop:value}
s=fil:Blob{prop:value}
s=table{prop:value,-n}
The current contents of the blob are returned.

aBlob[0:3] = 'abcd'
fil:Blob[0:3] = 'abcd'
Assigns text into the blob. Makes the blob bigger if necessary.

aBlob{prop:value} = 'abcd'
fil:Blob{prop:value} = 'abcd'
table{prop:value,-n} = 'abcd'
Assigns the text into the blob. Replaces the existing value completely and adjusts the size up, or down, as necessary.

CLEAR(FILE)
In object based drivers CLEAR(FILE) clears the blobs along with the record buffer (see the CLEAR command below).

BUFFER

The traditional drivers make use of a BUFFER command which allows the FILE or VIEW to buffer some number of records, either some already read or some number of future records. This can have a dramatic impact on performance, although the command is only used in the ABC ViewManager, in the shipping classes and templates.

BYTES64

Bytes64(FILE pFile),BIGINT

The BYTES command returns a LONG, which limits the number to around 2.1 billion. This may not be enough (for example the DOS2 driver supports files greater than 2 gigabytes). The BYTES64 command is a new command which returns a BIGINT value (at the time of writing, this is a REAL.)

If this command is called against a traditional driver then the regular BYTES command is called, and a value less than 2.148 billion is returned.
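For example (BigFile being any FILE; the result variable is declared as a REAL since that is the current BIGINT mapping):

```clarion
FileSize  Real
  Code
  FileSize = Bytes64(BigFile)    ! safe for files larger than 2 gigabytes
```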

See also Pointer64 and Records64.

CLEAR

The CLEAR(File) command clears the record buffer. It is recommended that this form of CLEAR is called rather than the CLEAR(fil:record) form.

Using CLEAR(File) clears the record, clears the blobs, and also clears all the properties (prop:Filter et al, which are supported for many of the methods) by calling ClearProps.

Note - If calling the method in the object directly then use the method name ClearFile instead of the word Clear.

CLEARPROPS

CLEARPROPS(FILE pFile)

The CLEARPROPS command clears all the properties (prop:Filter et al) which are supported for many of the methods.

It does not clear the record buffer.
It does not clear prop:Sql.

It is recommended to call this command before COPY, MERGE, SET and RESET.

This command also sets the Error code back to No Error (0).
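For example, clearing any stale properties before setting up a new filtered read (field and key names are illustrative):

```clarion
ClearProps(Customers)                      ! clears prop:Filter et al, resets the error code
Customers{prop:Filter} = 'Cus:Age > 21'
Set(Cus:NameKey)
```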

COMMIT

COMMIT(FILE pFile)

If LOGOUTCONNECTION is used to start a transaction, then COMMIT MUST be called with the primary table. Calling COMMIT with no parameters in this situation will cause a GPF.
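A sketch of the required pattern, assuming LOGOUTCONNECTION takes the same parameter list as LOGOUT, and that ROLLBACK follows the same primary-table rule as COMMIT:

```clarion
LogoutConnection(2, Customers, Orders)   ! start a transaction on the connection
Add(Customers)
Add(Orders)
If ErrorCode()
  Rollback(Customers)                    ! primary table passed explicitly
Else
  Commit(Customers)                      ! COMMIT() with no parameter would GPF here
End
```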

For more information on  commit see Transactions.

Context

There's a popular saying;

"If you get get five database administrators in a room, and ask a SQL question, you'll get ten opinions."

The primary reason for this is that SQL databases operate in many different contexts. And almost all SQL advice is context specific. Things that are optimal in one context may be sub-optimal in another. When reading, or getting, advice, it is important for the expert to understand your context, and to be offering advice that matches your context. Be careful of implementing advice which may be good in one context, but is not good in yours. Here are some broad, common, contexts. Your context may overlap some of these, or may be quite different.

These are just some of the context considerations. How you design your database, and how you deploy your database, will depend largely on your context. When advice is taken out of context then "myths" evolve.

For example, it's a common Clarion SQL myth that "tables should not have keys declared in the dictionary. The database will determine the best keys based on usage, and will create them as necessary." This advice varies between sort-of-true, and not-true depending on your context. Firstly it's important to understand that keys improve read performance, at the cost of some write performance, and some disk space. Regardless of context, keys are important for fast read performance. What can vary greatly is how those keys are determined, created, and maintained. If a DBA is overseeing the database this job generally falls to them. If there is no DBA this job generally falls to the program. The SQL database does not do it "by itself".

When it comes to database maintenance, your program runs in one of two contexts;
  1. Your program "owns" the data. It creates the tables, looks after them, updates them, alters them, and so on. In this scenario the dictionary is the source of "truth" - as you update the dictionary so the database updates as well. If you are coming from an ISAM (TPS) background this is the context you are used to operating in.

    In this context if you add a table in the dictionary it can be added to the database. If you add keys to your FILE structures, they can be created in the database and so on.
  2. Your program accesses the database, but does not own it. This is a common situation when the program is merely accessing a database that is updated and maintained by either a human (a DBA) or another program. In this context the program adapts to the database, not the other way around. The database itself is the source of truth.

    In this context the FILE structure, and keys can be a subset of what actually exist in the database. You can define additional keys in the dictionary, but these are not propagated into the database itself.
It is useful for the program, and by extension the driver, to know which of these two contexts is in play. The driver can do work to ensure that the database structure is kept up to date, however for backwards compatibility reasons this feature is OFF by default. To turn this feature on set the Driver Option (/MASTER), either at the global level or at the table level; a global setting can be overridden at the table level if desired.



The OPEN command checks that the Dictionary field set is a subset of the Database field set (ie fields can exist in the database that are not in your dictionary.)  It checks that Primary and Unique keys declared in the dictionary are declared in the database. Other than that, extra keys in the dictionary or in the database are ignored.

CONNECT

Connect(FILE pFile, LONG pOptions=0)

The CONNECT command is not supported by traditional SQL drivers. It is new to object based drivers.

The object based driver typically connects to the database when a command requires it, so the CONNECT command is not usually called in code. However there are situations where a connection to the database is desirable, even if the code is not yet ready to perform a command. In this case the CONNECT command can be called.

The CONNECT command can take an Options parameter, which is File Driver dependent. See the specific driver documentation on what can be used here.

See also DISCONNECT.

Connection Pooling

Inside the driver, at a fairly low level, the driver has to "connect" to the database. This is known as a connection. Typically the programmer does not need to understand how this happens, or how it works, it just happens.

For various performance and resource reasons multiple driver objects can share the same connection. Again this is done automatically by the driver, and in most cases there's no reason to modify this behavior. In some cases though it can be useful to prevent objects from sharing the same connection, and so some control over the pool is offered.

Firstly it's worth understanding when connections will be pooled, and when new connections will be created;

Connections are only pooled if the objects are on the same thread, and if their owner strings are identical. Even a small change in the owner string will trigger a separate connection, so that is one way to control whether tables will share a connection or not. In addition to the owner, a new property, prop:Pool, also exists. Connections will only be shared if the tables have the same prop:Pool number. The default value for prop:Pool is 0.

Therefore, if there are two tables, and it is desirable for them not to share a connection, then the prop:Pool for one of them can be set to a non-zero number.

prop:Pool is automatically set back to 0 when CLOSE is called.

Pooling also comes into play when transactions are involved. After a transaction is started, then all writes, to all tables sharing that connection, are part of the transaction (whether the file was listed in the LOGOUT command or not.) Therefore by setting prop:Pool to a common (non-zero) value before the tables are opened, the tables can be "isolated" from the other tables. In most cases there will be no gain from this change, but in some cases, where some writes must occur outside of the transaction (while a transaction is underway) this property can be useful.

If a table is open, and you wish it to change to a new connection, then set prop:Pool, and call CONNECT. It should be noted that if the file is open when this happens, it will be closed. For this reason it is best done before the file is OPENed.
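For example, to force a hypothetical Audit table onto its own connection (so its writes fall outside any transaction running on the shared connection):

```clarion
Audit{prop:Pool} = 1      ! any non-zero value not shared by the other tables
Open(Audit)               ! Audit now opens on a separate connection
```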

If for some reason it's desirable to get the actual connection handle, this can be retrieved using prop:Hdbc.

COPY

COPY(FILE pFile, STRING pNewFileName)

The COPY command is not supported by traditional SQL drivers. Object based drivers allow the COPY command to be implemented by the driver. 

The COPY statement duplicates a table in the database. The table to copy can be open or closed. If the new table name is the same as the current table then the COPY statement is ignored.

If the destination table already exists, then it is DROPPED first (and all existing information in the table is lost). If you want to merge (not copy) the table, then see the MERGE command.

Note that this only copies the table structure, and data. It does not copy indexes, relationships, triggers and so on.

All supported properties should be set (or cleared) before the call to COPY .
Calling CLEAR(file) or CLEARPROPS(file) will clear all the properties.

The COPY command supports prop:Project, prop:Filter, prop:SQLFilter, prop:Limit and prop:Offset. If a subset of the fields is projected, then the destination table will set the remainder of the fields to their default value, which may be NULL. The destination table when created will include all the columns present in the source table, regardless of whether all the columns are projected or not.

All the properties are cleared at the end of the call to COPY, regardless of success or failure.
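For example, copying a filtered subset of rows to a new table (table and field names are illustrative):

```clarion
ClearProps(Customers)
Customers{prop:Filter} = 'Cus:Active = 1'
Copy(Customers, 'CustomersBackup')    ! CustomersBackup is dropped first if it already exists
```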

CREATE

The CREATE command is seldom used with traditional SQL drivers because it lacks the flexibility and power most people require when creating SQL tables in their database. In an effort to make creating tables simpler, the CREATE in object based drivers is more powerful, and safer to use.

Traditional SQL Driver Object Based SQL Driver
Does an implicit DROP TABLE before the CREATE TABLE statement. This conforms to the behavior of ISAM drivers, however is problematic if the OPEN command incorrectly returns a File Not Found (02) error. This problem has led to many tables in databases being inadvertently dropped. Does not do a DROP TABLE. If the table already exists then CREATE fails with a File System Error (90).

If the table should be dropped then use the REMOVE command before calling CREATE.
No control over how the field is created in SQL. Supports Extended Name Attributes including SQLTYPE, COLLATE, BINARY, NOTNULL, DEFAULT, ISIDENTITY, UUID and more.
Does not support Server Side constraints Supports server-side constraints using Extended Name Attributes.
Does not support Relationships Relationships can be defined using the ADDRELATION command. If this is done before CREATE then foreign keys, and foreign key constraints (and actions), are created.

Custom Properties

Object based drivers support the setting, and retrieval, of custom properties. This allows you to name and set a property and associate it with the table. The property is valid for the life of the object (ie is limited to the thread if the file is threaded.)

To set a property the syntax is;

file{'someproperty',index} = 'whatever'

The index parameter is optional, if omitted it defaults to 0.

To retrieve a property the syntax is

something = file{'someproperty',index}

again, the index parameter is optional and defaults to 0.

The property name is not case sensitive. If the name is a number then care should be taken not to overlap with property numbers defined in property.clw or CWDriver.inc. (For best future-compatibility do not use a number as the name.)

Custom properties are useful because they are attached to the object, meaning that they are in scope in any place where the FILE is in scope. So they reduce the need for parameters (attached to the file) to be passed.
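For example (the property name is arbitrary):

```clarion
Customers{'LastSyncTime'} = Clock()       ! set; the index defaults to 0
! ... later, anywhere the Customers FILE is in scope ...
SyncTime = Customers{'LastSyncTime'}      ! retrieve
```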

DELETE

Delete(FILE pFile)
Delete(FILE pFile,String pFilter)

The traditional command DELETE(File) works as it always has. It deletes the most-recently-read record (which has been retrieved using GET, REGET, NEXT or PREVIOUS).

Object based drivers extend the command allowing a filter statement to be passed to the command. This allows multiple records to be deleted at the same time.

If supported by the driver, prop:Affected is set with the number of rows which were deleted.
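For example, a sketch deleting all matching rows in one call (the field name is illustrative):

```clarion
Delete(Customers, 'Cus:Balance = 0')
RowsDeleted = Customers{prop:Affected}    ! number of rows deleted, where supported
```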

To delete all the records in a table see the EMPTY command.

Note: OVER and ARRAY fields cannot be used in the filter.

DISCONNECT

Disconnect(FILE pFile)

The DISCONNECT command is not supported by traditional SQL drivers. Traditional drivers do support file{prop:Disconnect} however in traditional drivers that command applies to all FILE structures, and across all threads, for that driver. The DISCONNECT command does not behave like that.

In object based drivers, calling CLOSE on a table disconnects the FILE from the connection manager. This won't necessarily close the connection; that will only be done when the last table using the connection is closed. Also, some commands can be executed when the file is in a closed state - and these will necessarily make a connection. Therefore, if you use such a command, and the table is not open, you may want to call CLOSE or DISCONNECT to remove the table from the connection.

It is not absolutely necessary to call either CLOSE or DISCONNECT. If a File Object goes out of scope[1], then it will automatically be CLOSEd and DISCONNECTed for you.

Note 1: The file object goes out of scope when the FILE structure goes out of scope. The FILE structure goes out of scope as follows;
a) If the FILE is declared locally in a procedure, then it goes out of scope when the procedure ends.
b) If the FILE is declared locally in a MODULE then it behaves as if it is global.
c) If the FILE is declared globally, with the ,THREAD attribute, then it'll go out of scope when the thread ends.
d) If the FILE is declared globally, without the ,THREAD attribute, then it'll go out of scope when the program ends.

Prop:Disconnect

Traditional drivers support using prop:Disconnect to close the connections to a database. This is a fairly dangerous command as it applies across all threads, and to all tables using the same database. It is primarily used when the developer is sure all tables are closed (on all threads) and is used to preempt program hangs caused by server-side disconnections[1].

In object based drivers, connections are handled differently.[2] In this situation connections are threaded (ie each thread makes use of its own connection) and the DISCONNECT command allows a specific table to disconnect from the database, however the actual database connection is not disconnected until the last table object has disconnected.

For this reason, in object based drivers, the call to prop:Disconnect has been deprecated. If a call to prop:Disconnect is made, then no action will be taken.

Note 1: The main use of prop:Disconnect is to prevent program hangs caused by server-side disconnections. These occasional hangs are a serious inconvenience to programs using traditional SQL drivers. Since object based drivers use a different connection management approach, which does not suffer from these hangs, this solution is no longer necessary.

Note 2: This next paragraph is technical, and of no interest to most programmers. Feel free to ignore.
There is a new class inside the driver called the DriverConnectionPoolClass. A threaded instance of this class exists inside the driver, so each thread has its own connections. Different tables connecting to the same database, with the same database options, share the same connection. When the last table disconnects from the database then the connection is automatically closed.

Prop:Distinct

Before calling the SET command, the prop:Distinct property can be set. This will add the DISTINCT clause to the SQL SELECT statement. This tells the server to only return unique records, duplicate records are not included in the result set. Object based ISAM drivers may, or may not, implement this property so check driver documentation.

customers{prop:Distinct} = true
Set(Cus:NameKey)
Loop ...


This property is not reset to false except on an OPEN, CLOSE, CLEAR or CLEARPROPS. If you wish to re-use the FILE with this property off, then you must set it to false (if you previously set it to true).

It is not necessary to set prop:Paging to false; when prop:Distinct or prop:NotDistinct are used, paging is automatically disabled.

Rows in the table are unique. So retrieving all the fields in a row will mean that prop:Distinct will have no effect (since all the rows will be fetched.) Therefore, in the FILE context, prop:Distinct is only useful if you are using prop:Project to retrieve only a subset of columns, and those columns do not include all the fields in any unique index.

Since there is no row-identifying field in the result set, the results of this request are automatically set to be read only (so no PUT calls are allowed.)

Expanding the example above;

customers{prop:Distinct} = true
customers{prop:Project} = 'Cus:Lastname,Cus:Age'
Set(Cus:NameKey)
Loop ...


This will return a list of names and ages, but without any duplicates.

If the prop:Distinct is set, but the prop:Project is not set, then the prop:Distinct is ignored and the implicit prop:Readonly is not applied.

See also prop:NotDistinct.

Driver Options String

Clarion traditional drivers support two ways to send commands to the driver. The first is via the Driver Options String - which is typically set in the dictionary. While this can be a variable (as it is with the Owner and Full Path Names) it is usually coded to a fixed value by developers.
In addition to this is the SEND command, which allows commands to be passed on an ad-hoc basis.

One disadvantage of the dictionary driver string is that it is set on a table-by-table basis. Which means that for some settings the driver string has to be duplicated on every table (or, at the very least, a global variable has to be set for each table.)

Object based drivers (in Clarion 10 and later) introduce an additional way[1] to set driver options; by setting a user-defined system property. The system property is named Driver:xxx where xxx is the name of the driver. So, for example, the SQLite2 driver uses the property system{'Driver:SQLite2'}.

Example

System{'Driver:SQLite2'} = '/master=Dict'

System properties are not threaded. Thus you can set the property once, presumably when the program starts, and it will then apply for all tables, and on all threads, using that file driver.

Of course the regular file driver string remains. It is evaluated after the system property. If the same setting is in the system property and the file driver string, then the value in the file driver string will be used. The file driver string can still be set on a table by table basis, and if a variable is used, then the variable can be threaded.

The SEND command can be used to send additional commands and instructions to the driver (as has always been the case). This naturally overrides the existing value in the setting, however care should be taken with regard to when the SEND command is used. The system property and file driver string are evaluated at specific points in the code[2], and this effectively "resets" the properties mentioned in the system property and file driver string back to their original settings. So SEND commands on these properties will be reset; other SEND commands will not be reset.

The exact options available to the developer depend on the driver. See the driver documentation for specific options and values.

While the driver strings and SEND command are useful to set behavior in the driver, it's also desirable to set properties in the database engine itself. These options are very database specific, and may even vary between one version of a database and the next. For the options available to your database see the database documentation. (For example, in SQLite the documentation is at https://www.sqlite.org/pragma.html .)

The object based drivers support these options at all three levels - in the system property, the file driver string, and using the SEND command.

Example - system property;

System{'Driver:SQLite2'} = '/AUTOINC=TRUE /pragma_cell_size_check=true /UNSAFE=Stop'

Example - file driver string;

Customer             FILE,DRIVER('SQLite2','/AUTOINC=TRUE /pragma_cell_size_check=true')

Example, SEND command;

SEND(Customer,'/AUTOINC=TRUE /pragma_cell_size_check=true')

Some database commands are not instructions to set a setting, but rather a request to get the current value of a setting. For example in SQLite the command PRAGMA data_version; is a request to get the current value in data_version. Including this in a driver string doesn't make a lot of sense (since it would happen on every OPEN) however it is useful to use with the SEND command. The results of the call are returned by the SEND command.

d = SEND(Customers,'/pragma_data_version')

Common Driver Options

Option Default Value Description
/ALLOWDETAILS

/EXPLAIN=true false When this is on then ALL SET / NEXT loops for the FILE, and all VIEW structures, where this is the primary file, will trigger a Query Plan Explanation. Use this while developing to determine where additional keys are needed to improve performance.
/LOGFILENAME

/LOGMESSAGE

/MASTER=n
database
n is one of ;
database : the database is the source of truth. Field and key changes in the dictionary will not be propagated to the database. (The program will Error 47 if a field is in the FILE structure, but not in the database)

dict : The dictionary is the source of truth. The database will be updated with new fields and keys as they are added to the dictionary. Old fields and keys will not be removed.

dictfield : Fields added to the dictionary will be added to the database table. Keys added in the dictionary will not be added to the table.

dictkey : Keys added to the dictionary will be added to the database table. Fields added to the dictionary will not be added to the table. (The program will Error 47 if a field is in the FILE structure, but not in the database).

strictdict : As for dict, but fields and keys removed from the dictionary will be removed from the database.
/TURBOSQL=n false n is true or false. When true no file structure check is done when the table opens or is used.
/UNSAFE=n false n is one of;
false : Unsafe use of prop:Sql is not allowed.
true : Unsafe use of prop:Sql is allowed
stop : Unsafe use of prop:Sql will trigger a STOP, and then the prop:Sql command will proceed.


Note 1: This is only available in Clarion 10.11975 or later. User system properties were introduced at this time.

Note 2: The system property and driver string are parsed and processed at two points in the life-cycle of the FILE structure;
  1. When the object is initialized. This happens when the file is "touched" in any way (by any command or property) for the first time (on the thread, if the FILE is threaded.) In other words, this can be thought of as happening once per thread, when the FILE is first used on the thread.
  2. When the file is opened. Indeed every time it is opened. This has the potential to reset options set with SEND which occurred before the OPEN.

DUPLICATE

The DUPLICATE command has been extended to support non-unique keys.

From the Clarion documentation;

The DUPLICATE procedure returns a non-zero value (true) if writing the current record to the data file would post the "Creates Duplicate Key" error.

The current behavior remains, but with a small extension;

If the DUPLICATE(key) form of the command is called with a non-unique key (ie a key with the DUP attribute) the command will return true if a "matching" record already exists. However writing to the disk in this case will not cause an Error. 

Calling it in the form Duplicate(FILE) only tests unique keys. The above extension to the behavior only applies when called with a non-unique-key as the parameter.
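For example, warning the user about an existing phone number without blocking the save (Cus:PhoneKey is an assumed non-unique key, with the DUP attribute, on Cus:Phone):

```clarion
Cus:Phone = '555-0100'
If Duplicate(Cus:PhoneKey)
  ! a row with this phone number already exists;
  ! the write would still succeed since the key has the DUP attribute
End
```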

EMPTY

Traditional drivers require that the file be open in Exclusive mode to use the EMPTY command. Object based drivers do not have the same limitation. You can empty the table regardless of the open mode.

EMPTY removes all the records from the table. To remove one, or more than one record, see the DELETE command.

Note: If you are using server-side relationships then the use of EMPTY can be damaging. If called when relationships are active then the defined On Delete behavior may be executed by the database. This includes cascading deletes, clearing of child fields, and so on. Indeed the call may fail for one, or more, rows if there is a Restrict limitation placed on deleting the row. This may not be intended behavior.

If the call to EMPTY is part of a wider process (to drop the data, and then re-populate it) then strong consideration should be given to suspending relational updates (using SUSPENDRELATIONS). For more information see the section on Relationships.

A similar issue exists when using the REMOVE command.
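If EMPTY forms part of such a drop-and-repopulate process, the shape of the code might be as follows. This is a sketch only; the table name is illustrative, and the single-parameter form of SUSPENDRELATIONS shown here is an assumption, so consult the section on Relationships for the exact usage;

SuspendRelations(Invoices)  ! assumed form; suspends server-side On Delete behavior
Empty(Invoices)             ! no cascading deletes or Restrict failures are triggered
! re-populate the table here, then restore relationships as per the Relationships section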

Exclude Empty Keys

A lot of programs get to SQL via an ISAM route. They start off as Topspeed files, and at some point the system changes to being SQL. Converting programs like this can be challenging because some features of Topspeed are not supported by the traditional SQL drivers. One of these features is the creation of Keys that make use of the option Exclude Empty Keys.

Unlike Arrays and Memos (two other common issues when converting from Topspeed) understanding the functionality provided by the key, in this application, is critical in duplicating that functionality in SQL. Different programs use the key to provide different functionality.

Compact sort ordering

The original use of the key was to create compact sort orders on the disk. By removing the blank values from the key, the key itself would take up less space on the disk. In the modern era disk space is plentiful and cheap, so this use case is rarely seen anymore (other than as a historical artifact in the program.) This use-case is less applicable to SQL.

If this was the reason for the option being set, then it can simply be removed from the dictionary. The key can then be created as normal, and can include all values.  The program can simply be used as-is, however it may be necessary to add, or alter, the prop:Filter when using the key to exclude rows with zero or blank values.

Not-Blank-Unique

A second common use case pairs the Exclude Empty Keys option with the Require Unique Value option. In this case the Topspeed driver enforces uniqueness on the field IF a value is entered. [1] So, for example, a phone number must be unique (if one is entered) but can be left blank if unknown (and no uniqueness check is done if the field is blank.)

This matches the SQL concept of a Partial Index. Most databases [2] support Partial Indexes, although some drivers may use an alternative strategy to achieve the effect. If the program is using the key in this way, then it can simply carry on using it like this, as long as the index is created correctly. (using the CREATE statement will create it correctly.)

Note 1: It's worth explicitly pointing out that if the KEY is not Case Sensitive, then the uniqueness is not case sensitive. Meaning that if a product a/123 exists then you cannot add A/123.

Note 2: Partial Indexes are not supported in MySQL or MariaDB. Firebird requires version 4.0 or later. SQLite requires version 3.8.0 or later. MS SQL Server supports them (since SQL Server 2008), but they are called Filtered Indexes, and they cannot be unique. Where not supported, alternative techniques can be employed by the driver to achieve the same effect.

EXISTS

Exists(FILE pFile),LONG
Exists(FILE pFile,STRING pTableName),LONG


The Clarion EXISTS command allows the program to query the disk system to see if a file exists. The object based drivers extend this command to check the database to see if the table exists in the database or not.

The pFile parameter does not need to be open or closed (either is fine.) If this is a SQL driver, the connection string (OWNER) of the table must be set before calling this command.

If the second parameter is omitted, or blank, then the pFile table is checked. If the second parameter is set then that table name (in the same database as pFile) is checked. The pTableName table does not need to match the structure of pFile.

The function returns true or false.

If the database request fails then Errorcode() is set to one of BadNameErr (45), NoDriverSupport (80), or BadOwnerErr (122) and the return value is false.
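For example (the ArchivedCustomers table name is illustrative);

If Exists(Customers)
  ! the Customers table exists in the database
End
If Exists(Customers,'ArchivedCustomers')
  ! a table named ArchivedCustomers exists in the same database as Customers
End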

EXCEPT

Except(FILE pFile)
Except(FILE pFile,STRING pFilter)

The EXCEPT command is used when creating Subsets. It allows results to be removed from the current result set.
It typically (though not necessarily) follows a call to SUBSET.

Example

Create a list of customer email addresses, but exclude addresses listed in the Blocked table.

Set(Cus:NameKey)
Subset(Customer,Cus:Email)
Subset(Blocked,Blo:Email)
Except(Blocked)


Example

Create a list of Invoices for this month, for which a Credit Note has not been issued

Set(Inv:NumberKey)
Subset(CreditNote,'Cre:InvoiceId')
Union(Invoices,'Month(inv:date) = ' & Month(Today()) & ' AND Year(inv:date) = ' & Year(Today()))
Except(Creditnote)


Other related commands are UNION, UNIONALL and INTERSECT.

EXPORT

Export(FILE pFile,String pDosFileName,Long pFormat=driver:csv,Long pOptions=0)

The EXPORT command is used to create a text file from the database records. To import data from a text file see the IMPORT command.

Parameter Type Description
pFile FILE The label of the FILE to export.
pDosFileName STRING The full path and name of the file to save the export to. Note that this command should assume that the export is done natively on the database server, and the server needs access to write to the actual CSV file. So this (full path name) should be valid from the server-side.
pFormat LONG Defaults to Driver:CSV. This parameter determines the format of the output.
pOptions LONG Defaults to 0. Not all options are supported in all formats. Not all options are supported by all drivers. Consult driver documentation for details. Can be a combination of the following (added together);
driver:append - The data is appended to an existing file.
driver:header - A header line is added as the first line in the CSV file. If driver:append is set, and the file is not blank, then the header is not added. In other words, the header will only be added as the first line in the file.

The EXPORT is primed using a SET command. After the SET, instead of doing a NEXT, EXPORT is called instead. All the properties relevant to the SET command (including filtering and ordering) are relevant here.

For the Driver:CSV format the following additional properties are also in play;

Property Default Value Description
prop:Separator ',' The separator to use between fields in the row. The property can be up to 10 characters long. Null characters are not permitted.
prop:LineEndings '<13,10>' The separator to use between rows. Default value is Carriage-Return LineFeed.  The property can be up to 10 characters long. Null characters are not permitted.
prop:AlwaysQuote false If set to true, then string fields are always quoted, even if they do not contain reserved characters.
If false then only strings that contain reserved characters (the separator, line endings, and double quote) are quoted.
prop:NullString '^null^' Null values are written to the text file using this placeholder. The property can be up to 10 characters long. Null characters are not permitted. Null strings are NOT affected by the prop:AlwaysQuote property.


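A sketch of a typical export, priming with SET, optionally filtering, then calling EXPORT (the table, field and path names are illustrative);

Set(Cus:NameKey)
Customers{prop:Filter} = 'cus:Active = 1'
Export(Customers,'c:\temp\customers.csv',driver:csv,driver:header)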

prop:Explain (SQL)

The SET command allows you to create a result set from the data in the database, and then loop through that result set using the NEXT and PREVIOUS commands.

Typically a KEY (or INDEX) is used to define the sort order in the SET. However with the addition of the prop:filter and prop:order properties to the SET command, it is possible for the SET command to become quite complex. Ultimately the server will decide which indexes to use, or whether a full table scan is required. It can be useful to know which (if any) index it ended up using, and if a full table scan was needed, perhaps another index on the table could improve performance.

Explain mode is triggered by setting prop:Explain

Customers{prop:Explain}
or
Customers{prop:Explain} = true

This property is ONLY applied to the next SET command, and is then automatically cleared. The SET command will still function, so the use of this property should not affect the behavior of the actual code.
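For example, to see which index the server chose for a filtered SET (the table and field names are illustrative);

Customers{prop:Explain} = true
Customers{prop:Filter} = 'cus:State = ''CA'''
Set(Cus:NameKey)   ! the explain output is generated for this SET
! later SET commands are unaffected; the property has been cleared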

Obviously this property does some work, and so will slow down the query a little bit. Thus it should be used for development only, and not included in general code.

The format and destination of the output depends on the specific file driver. Consult the driver documentation for information on the expected results, where the results go, and the formatting of results.

See also: Performance Tips

Extended Name Attributes

The External Name attribute of a field, as declared in the code (ie, as set in the dictionary) is "visible" to the driver. For this reason extra desired functionality for a specific field can be passed to the driver in this way. The External name becomes a Pipe ( | ) separated list, which contains both the External Name and also Extended Attributes.

Clarion traditional drivers support some extended name attributes already. These can be applied at the field level to specific fields in the table declaration. This allows the developer more control over the way the driver treats individual fields. It's especially useful for SQL tables, although is used by some of the other drivers as well, where it is appropriate to do so. For a more in-depth discussion of Extended Name Attributes in general see ClarionHub.

Note that for traditional drivers, the Attribute name is Case Sensitive, and the spacing in the name attribute matters. In object based drivers the attribute is not case sensitive, and extra white-space can be included, or excluded, as you desire. Object based drivers support all the traditional attributes, as well as some additional attributes.

For more information on the traditional attributes, the Clarion Documentation is recommended.

Attribute Support Applies Description
BINARY Traditional & Object Based FIELD, BLOB Specifies that the field will hold binary data. Can be used on STRING fields. The driver will take this into account when creating the table with CREATE.
CHECK Object Based FIELD See VALIDATE below.
CHECKFORNULL Traditional FIELD, BLOB In SQL, records with the NULL value in the WHERE clause are not included in the result set.
This switch alters WHERE clauses, so that records with NULL fields are included in the result set.
COLLATE Object Based FIELD, BLOB In SQL, when creating a table, the COLLATE attribute determines how the field is compared to other fields. Are trailing spaces clipped? Is the field Case Sensitive? and so on.
DEFAULT=xxx
DEFAULT(xxx)
Object Based FIELD, BLOB When creating the table, specify that this field should be set to this default value. Do not use quotes. Usually used in conjunction with NOTNULL. Can be set to a blank string using just DEFAULT= or DEFAULT() with no value.
ISIDENTITY Object Based FIELD
Specifies that the field is a server-side auto-incrementing field. There may be some restrictions on this based on the specific driver, so check the driver documentation. The field will not be included in ADD, APPEND, PUT, UPSERT statements.
NOTNULL
Object Based FIELD, BLOB When creating the table, specify that this field should be set to NOT NULL as a constraint.
NOWHERE Traditional FIELD, BLOB  Some back ends do not allow certain data types in a WHERE clause, but they fail to advise the ODBC Accelerator Driver of this restriction. The NOWHERE switch lets you manually advise of the restriction when WATCH causes the driver to generate an error.
READONLY Traditional & Object Based FIELD, BLOB The field will not be included in ADD, APPEND, PUT, UPSERT statements. When ADD, PUT or UPSERT are called then readonly fields in the buffer are automatically updated to reflect the database value.
REQ Object Based FIELD Equivalent to NOTNULL and <> 0 or <> ''
SELECTNAME=xxx
UPDATENAME=yyy
INSERTNAME=zzz
Traditional FIELD, BLOB When Selecting, Updating, or Inserting, the specified name will be used as the field name instead of the External Name, or Label. Do not use quotes.
SQLTYPE=xxx
SQLType(xxx)
Object Based FIELD, BLOB When creating the table, specify that this field should be set to this type, regardless of the type Clarion would traditionally use. Do not use quotes.
(Each driver also has a custom form of this equate, see documentation for each driver.)
UUID4, UUID7, UUID8 Object Based FIELD
These fields should contain a UUID value when being added to the database. If the field is blank on an ADD or APPEND then a UUID value will be generated into the field for you. UUIDs are an alternative to Auto Numbering.
> n, >= n
< n, <= n
<> n
Object Based FIELD Used for server-side validation of the field. Validation will be added when creating the table.
Ignored if the field is BOOLEAN.
VALIDATE(xxx) Object Based FIELD xxx is a SQL CHECK expression which is included in the table declaration when CREATE is called.
WATCH Traditional FIELD, BLOB If set at the field level, then only the fields with WATCH will be checked when writing a record with a PUT.
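For example, a purely illustrative FILE declaration combining several of these attributes in the External Names (the table, field and attribute values are assumptions for the sake of the example);

Customers            FILE,DRIVER('SQLite2'),PRE(Cus),CREATE
Record                 RECORD
Id                       LONG,NAME('id | IsIdentity')
Email                    STRING(100),NAME('email | NotNull | Default=')
Age                      LONG,NAME('age | >= 0')
                       End
                     End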

External Key Names

Most databases store key names external to the tables in the database. In other words all the key names are stored internally in a table, database wide, and thus the key names need to be unique.

Traditionally this means that Clarion developers have got into the habit of putting unique external names against keys in the dictionary. Typically (but not necessarily) by merging the prefix with the key name. This, however, would be the wrong thing to do. It is wrong for two reasons;

Firstly, it's unnecessary. The drivers automatically create a unique name if one is not supplied. For traditional drivers it consists of prefix_keylabel. So adding this as the external name is redundant. Object based drivers use tablename_keyname.[1]

Secondly, and more importantly, it breaks ALIAS functionality when the database contains two tables of the same structure.

An ALIAS is a dictionary approach to duplicating a table in the Clarion dictionary. It is primarily used for two use cases; because multiple record buffers into the same table are required, or because multiple tables with the same structure, (or the same table in multiple databases) need to be accessed at the same time.

If the external key name is "fixed" then the ALIAS cannot CREATE a clone of the table because the key names will then be non-unique. Leaving the External Key Name blank will avoid this problem because then the prefix of the ALIAS will come into play (which will then result in a unique key name.)

The External Key Name is not in fact used in program code anywhere. Code makes use of the key label, so the name is irrelevant. Thus leaving the driver to generate unique names is the best approach.

Setting the External Key Name explicitly in the dictionary has implications for the CREATE command (as mentioned above) as well as the COPY and MERGE commands, which may create "clones" of the table.

Note 1: Object based drivers use the longer name because this allows the same FILE in the program to create multiple tables in the database which have the same structure, but different table names. Since the same FILE is used the prefix would be common, thus if the prefix was used in the key name, the key name would not be unique.

FROMQUEUE / TOQUEUE

FromQueue(FILE pFile, QUEUE pQueue)
ToQueue(FILE pFile, QUEUE pQueue)

There are situations where cloning a file structure as a local queue can be advantageous. These new commands exist as a convenience, making it easier to move data between queues and tables. While it is possible to write your own loops to do this (as simple, or as complicated, as you need them to be), these built-in commands are deliberately simple, limited in functionality, and do minimal error checking. If nothing else, their code serves as a foundation for the more complex code which you may need in more complicated situations.

First, the Queue must be created in a very specific way;

SomeQueueName   Queue(pre:Record),Prefix(SomePrefix)
! additional fields go here if desired
                End


The prefix is required since the field names are the same as the table. Declaring it using the pre:Record means that the Queue is declared with the same data structure as the File record. If the file record changes, then the queue will as well.

Note that since BLOB and MEMO fields are outside the Record structure, these commands cannot be used with BLOB and MEMO fields.

Using TOQUEUE primes the contents of the queue from the table using a SET / NEXT loop. All the properties which can be set for use by the SET command are therefore in scope, and can be set before the call to TOQUEUE. These include prop:Project and prop:Filter. Since (short of running out of RAM) Queues just work, this command is not expected to return any errors. However if the file is closed, or the SET command generates an error, then that error will be set, and the command will fail.

Calling FROMQUEUE writes the data in the queue to the table. It uses the UPSERT command, thus adding new records, or updating existing records. Again, prop:Project is in scope and can be used to do partial-record writes. BLOB and MEMO fields are left unchanged if the record is being updated.

It is not necessary to call TOQUEUE before FROMQUEUE. When doing bulk adds to a table, filling the queue and then calling FROMQUEUE is a fast way to populate the table. If not already inside a transaction then a transaction around the bulk insert (or update) will be done automatically.

The queue can be declared as above, with additional fields added to the queue. These additional fields will not be used by the FROMQUEUE or TOQUEUE commands.

The moving of data between the file record, and the queue buffer, is a simple memory copy. No attempt is made to assign the data on a field-level basis. Declaring the queue without using the pre:record, but using just fields, will need to be done very carefully to avoid field mismatches.

Example

CustomerQueue   Queue(Cus:Record),Prefix(CusQ)
                End

  code
  Open(Customers,42h)
  ! Load the queue with any number of records
  FromQueue(Customers,CustomerQueue) ! the queue records are merged into the table with an UPSERT


Another Example

EmployeeQueue   Queue(Emp:Record),Prefix(EmpQ)
                End

  Code
  Open(Employees,42h)
  Set(Emp:NameKey)
  Employees{prop:filter} = 'Emp:Age > 50'
  ToQueue(Employees,EmployeeQueue)
  ! do what you like on the queue here
  FromQueue(Employees,EmployeeQueue)



FLUSH (SQL)

For SQL drivers, when doing a SET/NEXT style loop, a SELECT statement is sent to the server and the results are fetched with the NEXT and PREVIOUS commands.

When breaking out of the loop, in most cases, that result set is no longer needed. In traditional drivers this information is only passed through to the driver when the FILE (or VIEW) is closed. Object based drivers support the FLUSH command. This tells the driver (and, if necessary, the database) that the result set is no longer needed, and can be flushed from memory.

Along with the use of the prop:Limit property, this is the most useful thing you can do after a SET/NEXT loop in order to improve performance, not just of the server, but of the program as well.

FLUSH cannot be called if a transaction is active. Doing so will set the error code to 56 Logout Already Active.

But wait, there's more.

If STREAM has been called to mark the start of a block of writes, then FLUSH closes the stream and allows the database to continue with normal operations. This means that the performance gains from using STREAM and FLUSH with ISAM files is translated to similar performance gains in SQL.
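A typical pattern, releasing the result set as soon as the loop breaks out early (the table and field names are illustrative);

Set(Cus:NameKey)
Loop
  Next(Customers)
  If Errorcode() Then Break.
  If cus:Balance > 0 Then Break.  ! found the record we were looking for
End
Flush(Customers)                  ! the remaining result set can be discarded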

GETRANDOM

The object based drivers introduce GETRANDOM as a new command. You will likely never need it. However when you do come across a need for it, then it's a very useful command to have lying around.

GetRandom(FILE pFile)
GetRandom(FILE pFile, String pFilter)

The GETRANDOM function works the same as a GET, except that it doesn't get a specific record from the table, but rather a random record. This is useful when say allocating an employee (randomly) to a bug, selecting a customer (randomly) to win a prize, doing random audits and those sorts of places where "just get a record by random" is a good solution.

If passed without a filter, then the command will prime the record buffer with any record from the table. If passed with a filter then only those records which pass the filter will be available to be chosen.

Example

Select any random customer; then select an employee, who has the status "available", to be allocated to this task;

GetRandom(Customer)
GetRandom(Employee,'emp:Available = 1')


If no record is available then the error 33 Record Not Available is set.

IMPORT

Import(File pFile, String pDosFileName,Long pFormat, Long pOptions=0)

Not Working Yet


The IMPORT command is used to import text files into the database. For exporting a table to CSV see the EXPORT command.

Parameter Type Description
pFile FILE The label of the FILE to import into.
pDosFileName STRING The full path and name of the file to import from. Note that this command should assume that the import is done natively on the database server, and the server needs access to the actual CSV file. So this (full path name) should be valid from the server-side[1].
pFormat LONG Defaults to Driver:CSV. This parameter determines the format of the input.
pOptions LONG Default to 0. Not all options are supported in all formats. Not all options are supported by all drivers. Consult driver documentation for details. See below for possible options.

The formats and options supported may depend on the specific database driver. The individual driver documentation should be checked for any deviations from this general information.

Parameter Supported Values Description
pFormat driver:csv The format of the file being loaded is a CSV (comma separated value) file.
pOptions driver:header The first row in the text file is a header file, and should not be imported.

driver:upsert The Import will be done using the upsert methodology. This means that if the row already exists in the database it is updated to the values in the CSV file. If this is not used, then the add methodology will be used - which in turn means that rows which already exist in the database will be left alone, and the imported row will be ignored.

Note 1: For SQLite and InMemory drivers the currently running program hosts the server, so the Server access rights and paths are the same as the programs access rights and paths.

CSV

The import file can be in CSV format. When parsing this format the following properties may be used. Not all drivers support all properties.

Property Default Description
prop:LineEndings CRLF
'<13,10>'
The separator which separates each line in the CSV file
prop:Separator
comma
','
The separator between each field in the line.

INTERSECT

Intersect(FILE pFile)
Intersect(FILE pFile,STRING pFilter)

The INTERSECT command is used when creating Subsets. It allows results to be removed from the current result set if they do not appear in this set as well.
It typically (though not necessarily) follows a call to SUBSET.

Example

Find all customers that purchased NetTalk and StringTheory

Set(Cus:NameKey)
Subset(Orders,ord:CustomerId)
Union(Orders,'Ord:ProductId = ''NetTalk''')
Intersect(Orders,'Ord:ProductId = ''StringTheory''')


Example

Find all Products that have at least one sale every year for the last five years.

Set(Products)
Subset(Orders,ord:ProductId)
Loop y = Year(Today()) - 5 to Year(Today()) - 1
  Intersect(Orders,'YEAR(ord:date) = ' & y)
End


Other related commands are UNION, UNIONALL and EXCEPT.

prop:Json

Traditional drivers do not support prop:Json.

Object based drivers return the text version of the field from the current record buffer, as a json encoded value. (See prop:Value if you want the raw memory value and prop:Text if you want the unadorned text value).  This property is especially useful if the field is an array field. If the field is not an array then a single-element-JSON-array is returned.

If the field is an array then a JSON array is returned, with each element in the array corresponding to an array value.

DATE and TIME fields (but not LONG fields masquerading as date and time) are formatted into a human readable date (@D010- by default) and time (@t04) and returned as JSON strings.  Time values are formatted as "+1" (meaning that 1 is added to the value before formatting.) This behavior can be altered by the driver so see your specific driver docs for non-standard behavior. (For example, the SQLite driver formats the time with added milliseconds - ie hh:mm:ss.ccc. )

For Example;

j = customers{prop:Json,3}

Keys, Indexes, Run time Indexes and Sort Orders.

Clarion supports flat-file (ASCII, BASIC, DOS) drivers, ISAM (TPS, DAT, BTrieve) drivers and SQL drivers. Clarion itself defines four different key types in the dictionary. Understanding how these four types are used in your program, and how they map to a SQL engine is important.

Clarion Term ISAM SQL Term SQL
Key A sort order that is kept up to date as records are added to the table. Primary Key, Unique Key, Foreign Key, Index These are kept up to date as the data is altered.
Static Index A sort order that is not kept up to date as records are added, but is constructed on an ad-hoc basis using the BUILD command before it is used. SQL has no direct equivalent of this (although it can be mimicked in some database engines.) For purposes of the SQL File Drivers, Static Indexes in the FILE structure are treated the same as Keys. In other words, from the SQL point of view, A Static Index is just another Key.
Run time Index A sort order that is undefined at compile time, but can be defined at run time. This allows for different users to define different sort orders. Run time Indexes have no real place in SQL. If you wish for additional indexes (keys) then you can simply add them at the database level (or via a CREATEKEY command.)
Order A sort order that is defined in the dictionary, but which is not included in the FILE structure at all, and so does not appear in the data file. It is used only at the template level as a simple way of collecting relevant information for the templates.

While keys are always discussed in terms of the Order of records, they are far more important when it comes to Filtering records. For optimal read performance it is important to have a key in your table which matches the filter you are applying to a table. The order of the result set, after the filter is applied, is of secondary importance.

It is not necessary for the list of keys and indexes in your dictionary and in the SQL database to match. It is possible to define keys in your dictionary that are not actually in the database, and it's possible to have indexes in your database which improve performance, even if those indexes are not declared in your FILE structure.

prop:Label

File{prop:Label,n} can be used to get the label of a field, memo, blob or key in a table. Traditional drivers however have no way to get the label of the FILE itself. (The name of the table can be returned by the NAME command,  but there is no way to get the label.)

Object based drivers attempt to close this hole a bit by allowing File{prop:Label,0} to return something other than a blank string.

If not explicitly set (see below) it returns an upper case label for the table. This is the Full Path Name of the table, as it was set when the file structure was first touched. If the table has no path name then it will be the first 8 characters of the Label.

The best way to get a valid value here is to set this property in code, and then it can be retrieved later on. It can be set with;

table{prop:Label} = 'Whatever'


Note: Setting prop:Label affects ALL instances of this structure, on all threads, for the life of the program 


Labels are always converted to uppercase by the driver. So after the above code;

s = table{prop:Label,0} or s = table{prop:Label} will return 'WHATEVER'.

prop:Limit, prop:Offset & prop:Paging

When querying a database, the server builds a result set, and then supplies that result set to the program. The result set is typically constrained by using the SET command, along with the prop:Filter.

Under the hood, the drivers (by default) make use of a paging system which allows the server to build the results one page at a time. This reduces load on the server. The size of the page is stored in prop:FetchSize and can be set by the BUFFER command or by directly setting the property. If not set the default value is used (at the time of writing this value was 30, but it might change.)

All properties including prop:Limit and prop:Offset should be set before the call to SET .

For some queries using pages is inefficient. If you know the result set will be a reasonable size (ie will fit into memory) and you know you will be using all (or most of) the records in the set, or if the combination of the filter and order forces a full table scan, then it can be more efficient to query the database once, and get the whole result set. In that case you can manually override the paging by setting

file{prop:Paging} = false

If you would prefer to do your own paging, then you can bypass the SET/NEXT paging mechanism by setting your own prop:Limit and prop:Offset[1] values. The driver will treat this as a single request, and apply your values rather than the built-in paging values. Setting the prop:Offset property only has effect if you set the prop:Limit property as well. Setting prop:Limit to -1 allows the prop:Offset to be used with "no limit".

file{prop:Limit} = 1000
file{prop:Offset} = 2000


In other situations you may have a firm idea of the number of records you want to retrieve. For example you may be selecting the top ten sales performers for the year, or the 10 biggest invoices and so on. In that case setting a specific limit allows the server to optimize the request for just the data you are looking for.

Note 1: Note that paging using prop:Offset has limitations on tables which are dynamic - ie where records are being added or deleted quickly. In a dynamic situation you may find better results using POSITION and RESET to move from one page to the next. prop:Offset though is particularly useful when multiple pages are handled at the same time, and the next page is required before the end of the current page is reached.
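For example, fetching only the ten largest invoices (the table and field names are illustrative);

Invoices{prop:Order} = '-inv:Total'
Invoices{prop:Limit} = 10
Set(Invoices)
Loop
  Next(Invoices)
  If Errorcode() Then Break.
  ! one of the ten largest invoices is in the buffer here
End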

Logging

A key technique to debugging data access, or understanding how (and when) the driver is being used, is to enable logging. When logging is turned on the flow of commands and data through the driver becomes visible, and can lead to a greater understanding of what is happening. Not surprisingly, over the years, this feature has expanded in the traditional drivers and it can be difficult to know what to do and when.

Trace.Exe

Any discussion of driver logging starts with the Trace.Exe utility, which is located in the \Clarion\Bin folder. This program sets logging settings at the driver level (in other words it applies to all tables using a specific driver.) It also applies to ALL programs using that driver.
It works by storing settings in an INI file. The default is the %AppData%\SoftVelocity\Clarion\Drivers.ini file.
It supports both traditional drivers, and object based drivers.

Since it is run outside your program, logging can be turned on and off without changing, or recompiling your code. While useful, this can have security (and performance) implications. For this reason, in object based drivers, support for the INI file settings, and hence for Trace.Exe is OFF by default. It can be turned on by setting system{prop:TraceFile} = 'default' , or by setting system{prop:TraceFile} to a specific other INI name.

If supported, the settings are read by the program when the driver is first accessed. So if changes are made to the settings in Trace.Exe, your program will need to be restarted in order for the new settings to take effect.

It is possible for your program to specify an alternate INI file location (system{prop:TraceFile}) or to set the settings to be read from the Windows Registry instead (System{prop:TraceKey}). If these are set then the Trace.Exe program has no effect, and the settings have to be set manually. These settings apply to ALL drivers.

It is also possible to suppress global logging by setting

System{prop:DriverTracing} = 0

This overrides the settings in system{prop:TraceFile} and System{prop:TraceKey} and turns off all driver-wide logging. However this does not affect specific FILE or VIEW tracing as discussed later. To take effect this property MUST be set before any file driver is initialized. In other words it only applies at the point where the INI, or Registry, settings are read.

Driver-Wide Settings

As discussed above, global settings can be set using the Trace.Exe program. However that only works if the Drivers.ini file is in the default location, and has that name. It's possible to set the name, and location of the INI file using system{prop:TraceFile}. Alternatively the settings can be stored in the Windows Registry using system{prop:TraceKey}. For example;

system{prop:TraceFile} = 'c:\log\Settings.Ini'
system{prop:TraceKey} = 'HKEY_LOCAL_MACHINE\SOFTWARE\DriverKit'

If both system{prop:TraceFile} and system{prop:TraceKey} are set then the INI file is ignored, and the registry settings are used.

For object-based drivers either system{prop:TraceFile} or system{prop:TraceKey} MUST be set to something for global Driver-Wide logging to take place. Frankly, this approach to logging is not recommended, for reasons of security, performance, and the sheer size of the resulting log file.

The supported setting names are as follows;

Setting Type Matching code property (described below)
BindData Boolean/REG_DWORD PROP:LogBind
Details Boolean/REG_DWORD prop:Details
Profile Boolean/REG_DWORD prop:LogClarion
Trace Boolean/REG_DWORD prop:LogSQL
TraceFile String/REG_SZ prop:Profile

File Settings

A better, and safer, way to log driver interactions is to log information for a specific file, for a specific block of code. This reduces the impact on performance, limits the security risk to the area you are logging, and results in a smaller (and hence easier to understand) log file. To simplify things even more, two new commands have been added; LOGGINGON and LOGGINGOFF. Using these it's possible to quickly enable, and disable logging for a file. In addition to these commands, the following properties are also supported;

Property Behavior
prop:LogBind In SQL the commands and the data are separate (most of the time). This is known as parameterization. If set to true then the values of fields which are "bound" to the SQL command are shown in the log.
prop:LogClarion If set to true then the Clarion commands which triggered the SQL are included in the log.
prop:LogData / prop:Details If set to true then the contents of the record buffer, at the time of the driver command, are included in the log.
prop:LogExplain If set to true then the SQL engine will be asked to provide a Query Plan for SET / NEXT loops. This plan is then included in the log.
prop:Log Allows a custom line to be injected into the log at this point. Using the LOG command instead is recommended.
prop:LogSQL If set to true then SQL statements are included in the log file.
Prop:Profile / Prop:Logfile If set (to a file name), this turns logging on. All file calls, and errors returned by the file driver, are logged. Assigning a filename to PROP:Profile turns on logging, while assigning an empty string ('') turns off logging. Querying this property returns the name of the current logfile, or an empty string ('') if profiling is turned off.
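As a sketch of how these properties can be combined (Customers here is a hypothetical table), logging can be scoped to a single block of code;

Customers{prop:Profile} = 'c:\temp\customers.log'   ! opens the log file, turns logging on
Customers{prop:LogSQL} = true                       ! include SQL statements in the log
Customers{prop:LogClarion} = true                   ! include the triggering Clarion commands
Get(Customers,cus:PrimaryKey)                       ! this call is logged
Customers{prop:Profile} = ''                        ! turns logging off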

For ad-hoc logging another two commands have been added.

LOG allows you to inject custom text directly into the log file (or, if no log file is set, it sends the text to Debugview++).

LOGBUFFER does the same thing, but logs the current contents of the buffer. If no log file is set this is also sent to Debugview++. This means, that at any point in the code, the current contents of the buffer can be observed simply by calling LOGBUFFER. For example;

LogBuffer(Customers)

Because the log file can be appended to by multiple users, on multiple workstations, and by multiple threads, if the log is being sent to a file then a large amount of identifying information is included in each line of the log.

A typical line in the log looks something like this;

[SQLite2][607][HULK][Gerald][4840][1][ 4/03/25][ 7:21:37][][sqlput2.sqlite][Ident91][66][DATA]  IDE91:NAME | Brian

More generally, the information is as follows;
[DriverName][Object ID][Computer Name][Windows User][Process ID][Thread][Date][Time][Server][Database][Table name][Open Status][type] data

Type is one of
[CLAR] : Clarion Code statement
[SQL ] : SQL Statement sent to the database
[BIND] : A binding of a SQL parameter to a specific value
[DATA] : The contents of a field in the record buffer.
[EXPN] : The Query Plan as returned by the server.

LOG

Log(FILE pFile, String pText)
Log(VIEW pView, String pText)

This performs the same function as setting prop:Log (but with tidier syntax). It writes to the currently open log file (as opened by LOGGINGON, or prop:profile). If no file is currently open, then it writes it to Debugview++.

Example

Log(Customers,'Calculating Tax Advantages here')

See Also: Logging

LOGBUFFER

LogBuffer(FILE pFile)
LogBuffer(VIEW pView)

If called on a FILE then all the fields (including blobs and memos) for the file structure are sent to the log.
When called on a VIEW then all the fields (including blobs and memos) for all the FILE structures in the VIEW are sent to the log.
Neither form of this command applies prop:Project - meaning that all the fields are included in the output, regardless of whether those fields are in the VIEW or not.

If no log file is active then the contents of the structure are sent to Debugview++.

This means that at any point in your code, while you are debugging, you can inspect the current contents of the FILE or VIEW structure simply by calling LOGBUFFER.

Example

LogBuffer(Customers)
LogBuffer(CustomersView)


See Also: Logging

LOGGINGOFF

LoggingOff(FILE pFile)
LoggingOff(VIEW pView)

Ends logging which was started with a call to LOGGINGON (or by a call to prop:Profile).

Example

LoggingOn(Products)
! some code goes here
LoggingOff(Products)


See Also: Logging

LOGGINGON

LoggingOn(FILE pFile,<String pFileName>,Long pOptions=0)
LoggingOn(FILE pFile,Long pOptions=0)
LoggingOn(VIEW pView,<String pFileName>,Long pOptions=0)
LoggingOn(VIEW pView,Long pOptions=0)

This command turns on Logging for a file. This logs commands into a text file, or Debugview++.

Parameter Type Description
pFile / pView File The label of the FILE or VIEW to log. All commands related to this FILE or VIEW will be added to the log file. Multiple FILE and VIEW structures can write to the same log file.
pFileName String The name of the text file to log to. If the file already exists then the new log will be added to it.
If the parameter is omitted, or is a blank string, then the output will be sent to Debugview++.
The filename cannot be a numeric value.
pOptions Long One or more of the following equates, added together. If this parameter is omitted, or 0, then ALL the options, except driver:LogExplain, default to true.

driver:LogSQL : SQL commands, as sent to the database, are included in the log. They are identified with rows including [SQL ].
driver:LogClarion : Clarion commands, as written in the code, are sent to the log. They are identified as rows including [CLAR].
driver:LogData : The contents of the record buffer are sent to the log. These are identified as rows including [DATA].
driver:LogBind  : Parameterized values bound to a SQL statement are included in the log. These are identified with [BIND].
driver:LogExplain : When doing a SELECT statement, caused by a SET/NEXT loop, the Query Plan (as explained by the database engine) is included in the log. These lines are identified with [EXPN].


For example;

LoggingOn(Customers)
Get(Customers,cus:PrimaryKey)
LoggingOff(Customers)


Or

LoggingOn(Customers,'Customers.Log',driver:LogSQL + driver:LogExplain)
Set(cus:PrimaryKey)
Loop
  Next(Customers)
  If ErrorCode() Then Break.
End
LoggingOff(Customers)


See Also: Logging

LOGOUT

For technical reasons, if an error occurs during a call to LOGOUT, the program may GPF. To avoid this, and to get an ErrorCode() set instead, use the LOGOUTCONNECTION command.

LOGOUTCONNECTION

LogoutConnection(FILE pFile)

For more information on LOGOUTCONNECTION see Transactions.

Note that if LOGOUTCONNECTION is used, then ROLLBACK and COMMIT MUST take the primary table as a parameter.

Example

LogoutConnection(Customers)
! some writes happen here
Commit(Customers)

Memos

In Clarion each data field has a type; LONG, STRING and so on. These types are all fixed-length, the string has a declared length, and so on. Most databases also support BLOB (Binary Large Object) fields. These are fields of indeterminate length, and which can hold binary values (and which can grow very large.) In between these two extremes lies a hybrid field, the MEMO.

The MEMO originated in the Clarion DAT format (in Clarion Professional Developer, a DOS product). The DAT format supports a single MEMO field in the record, and does not support BLOB. To make it easier to port DOS programs to Windows the TOPSPEED file driver supported MEMO (as well as BLOB.) None of the traditional SQL drivers support the MEMO type, although all support the BLOB type.

That's a pity though because the MEMO can be used directly on a Window, as the source to an ENTRY or TEXT control, or as a column in a browse. The templates, and the language, mostly treat the MEMO as a large string.

When converting TOPSPEED programs to a SQL database it's common practice to change the MEMO into a CSTRING. While this makes the behavior of the type consistent (and hence the conversion simpler) it can lead to issues where the chosen database has a maximum record size. Converting to a BLOB is also possible, however a BLOB cannot be used as the source to Window controls, and so extra code has to be added there. It also cannot be added to the VIEW by the templates - again creating more work.

The object based drivers do support the MEMO type. How this is mapped to the database is dependent on the specific driver. This will make conversions easier, but more importantly makes it simpler to make use of the MEMO benefits in SQL-based applications.

There is however a catch. The Clarion Documentation states the following;

Memory is allocated for a MEMO field's buffer when the file is opened, and is de-allocated when the file is closed.

In practice that's not true. Both the TOPSPEED and CLARION drivers support the use of the MEMO field even if the table is closed. In other words memory is set aside for the MEMO when the thread starts, regardless of whether the table is ever opened or not. This can have a significant impact on the memory assigned to each thread, especially if several large MEMO fields are in the dictionary.

With regard to blobs; the documentation for BLOB is equally clear - a BLOB is not available if the file is closed. In practice this is enforced, and if you access a BLOB field when the file is not open, the program will likely GPF.

Given that MEMO support has been added to the object based SQL drivers, there is a choice to be made. Either follow the documentation (which would be ideal for memory reasons) or follow the behavior of the TOPSPEED and CLARION drivers (which would be ideal for compatibility reasons.) As is usually the case, compatibility wins and so the current behavior is preserved.

MERGE

Merge(FILE pFile,FILE pDestination)
Merge(FILE pFile,STRING pDestination)


MERGE is a new command. It allows the contents of one table to be pushed into another table. The destination table becomes a super-set of the source table. If a source record exists which matches any unique key in the destination table then the destination record is updated, not duplicated.

The FILE structure of the source file determines the list of fields that will be merged. Fields that exist in the SQL table, but do NOT appear in the FILE declaration, will not be merged.

If the second parameter is a FILE then the parameter MUST be a FILE structure with an object based driver. The Destination table does not need to be open. If the Destination table has a Full Path Name variable or Owner variable then those MUST be set correctly before this call. If the destination table does not exist then it will be created.

If the second parameter is a STRING then it should name a table (in the same database) with a similar, or the same structure. If the table does not exist it will be created for you.

The tables are assumed to have the same, or similar, table declarations. The fields are matched by the column name in the database. Fields which are in the destination file, but not the source file are set to their (SQL) default value (which may be a NULL). Fields which are in the source file, but not the destination file, are not copied.

If the second parameter is a STRING it should be the fully qualified table name (ie of the form database.schema.tablename). If the destination table is in the same database then the name can be just the destination tablename. If this form of the function is used then the destination table MUST include all the fields, with the same names, as exist in the source FILE declaration.

If the destination table is in the same database, or (if supported by the database) a table in a different database on the same server, then the process is fast and no data transfer to the client is required. However if the destination table is a different driver or different server, then the process is somewhat slower as data has to flow first from the source database to the program, and from there to the destination server. The destination driver MUST support the UPSERT method.

This command is a one-way, upsert, merge. No deletes are performed on records from either table.

All supported properties should be set (or cleared) before the call to MERGE.
Calling CLEAR(file) or CLEARPROPS(file) will clear all the properties.

The MERGE command supports prop:Project, prop:Filter, prop:SQLFilter, prop:Limit and prop:Offset. If a subset of the fields are projected, then the destination table will set the remainder of the fields to their default value, or Null.

All the properties are cleared at the end of the call to MERGE, regardless of success or failure.

If the MERGE command is called against a traditional driver then the Errorcode() is set to NoDriverSupport (80).
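For example, a sketch of merging a filtered subset of one table into a second table (the table and field names here are hypothetical);

Customers{prop:Filter} = 'cus:Archived = 1'
Merge(Customers,ArchiveCustomers)
If ErrorCode()
  ! handle the error - eg NoDriverSupport (80) if Customers uses a traditional driver
End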

See Also: COPY.

NOMEMO

From the Clarion Documentation;

In Clarion the NOMEMO statement arms "memoless" record retrieval for the next GET, REGET, NEXT, or PREVIOUS statement encountered. The following GET, REGET, NEXT, or PREVIOUS gets the record but does not get any associated MEMO or BLOB field(s) for the record. Generally, this speeds up access to the record when the contents of the MEMO or BLOB field(s) are not needed by the procedure.

None of the traditional SQL drivers support this command, and so this occasionally significant optimization is not available to SQL users.

Object based drivers though do support NOMEMO, even for SQL databases. With one slight modification to the original specification;
  1. For Object based drivers NOMEMO works for GET, GETRANDOM and REGET as normal. It retrieves one record without memos or blobs, and the property is reset for subsequent reads.
  2. However for NEXT and PREVIOUS loops, the NOMEMO call applies to the whole result set as defined by the call to the  SET or RESET command, and is only cleared by the next SET or RESET command. In other words all NEXT and PREVIOUS calls inherit the NOMEMO option from the most recent SET or RESET command. Therefore if NOMEMO functionality is desired it must be called before SET is called in order for it to apply.
NOMEMO also works on the COPY and MERGE commands. It is cleared after these calls regardless of success or failure. Calling NOMEMO(file) before calling COPY or MERGE will result in the BLOB and MEMO fields not being copied, or merged. In the case of a MERGE existing BLOB fields in the destination table are unaltered.

If a read is followed by a write, then the write will inherit the NoMemo state of the read. In other words, if a record is read with NOMEMO active, and then it is PUT, then the BLOB and MEMO fields are not included in the PUT.

If called before the SEARCH command, then the Blob and Memo fields are included in the search, however the result set returned by the NEXT loop following the SEARCH will not include MEMOs or BLOBs. As above, it applies to the whole result set defined by the SET / NEXT loop, and is not cleared automatically.

NOMEMO will be cleared by a call to CLEAR(File) or CLEARPROPS(File).
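For example, a sketch of a memoless SET / NEXT loop (the table, key and field names are hypothetical). Note that NOMEMO is called before SET, so that it applies to the whole result set;

NoMemo(Customers)
Set(cus:NameKey)
Loop
  Next(Customers)
  If ErrorCode() Then Break.
  ! MEMO and BLOB fields are not populated in this loop
End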

Prop:NotDistinct

The opposite of DISTINCT result sets are sets where only records that are duplicated are included.

The value in this property represents the number of times the row is duplicated. So, to find all rows with 2 or more instances set this property to 2. For all rows with 3 or more instances set it to 3, and so on. To turn the property off, set it to 0. Setting it to 1 has no meaning, and has the same effect as setting it to 2.

This property is not reset to 0 except on an OPEN, CLOSE, CLEAR or CLEARPROPS. If you wish to re-use the FILE with this property off, then you must set it to 0 (if you previously set it to a value).

Rows in the table are unique. So retrieving all the fields in a row will mean that prop:NotDistinct has no meaning (no rows will be fetched.) Therefore, in the FILE context, prop:NotDistinct is only useful if you are using prop:Project to retrieve only a subset of columns, and those columns do not include all the fields in any unique index.

It is not necessary to set prop:Paging to false - when prop:Distinct or prop:NotDistinct are used, then paging is automatically disabled.

Since there is no row-identifying field in the result set, the results of this request are automatically set to be read only.

customers{prop:NotDistinct} = 2
customers{prop:Project} = 'Cus:Lastname,Cus:Age'
Set(Cus:NameKey)
Loop ...


The result set will contain one of each record (it doesn't return duplicated rows) but all the rows in the result set exist more than once in the table. As with prop:Distinct, the result set does not include row identifiers, however from this result set you can do a select to fetch the rows with those values.

If the prop:NotDistinct is set, but the prop:Project is not set, then the prop:NotDistinct is ignored and the implicit prop:ReadOnly is not applied.

See also prop:Distinct.

Nulls

In the object based drivers most of the Nulls functionality is handled by the base class.

Known differences between the traditional driver handling of nulls, and the object based driver handling of nulls is discussed below.

Code Traditional Drivers Object Based Drivers
GetNulls Returned a string with length (4 * number of fields). Returns a string with length (1 * number of fields).
GetNulls String could not be manipulated directly. String contains a + character for non-null fields, and a . character for null fields. This makes it possible to inspect the string to see all the null fields, and also to construct a string suitable for passing to SetNulls.
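For example, a sketch of inspecting the string returned by GetNulls in an object based driver (the table, and the field positions, are hypothetical);

NullString = GetNulls(Customers)      ! NullString declared as, say, String(100)
! eg NullString = '+.++' - here only the second field is null
If NullString[2] = '.'
  ! the second field in the record buffer is null
End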

POINTER64

Pointer64(FILE pFile),BIGINT

The POINTER command returns a LONG, which limits the number to around 2.1 billion. This may not be enough (for example the DOS2 driver supports files greater than 2 gigabytes). The POINTER64 command is a new command which returns a BIGINT value (at the time of writing, a REAL.)

If this command is called against a traditional driver then the regular POINTER command is called, and a value less than 2.148 billion is returned.

See also Bytes64 and Records64.

POSITION

The POSITION command returns a string, and that string contains enough information to "mark" where the record is. It can then be retrieved later, as a single record using REGET, or a SET / NEXT sequence can be restored, using RESET, to this position for further processing.

The actual string returned by POSITION varies by use-case and driver. No information about the contents, or length of the position string should be assumed. Generally speaking, when using Position(file) or Position(key) the string should be at least large enough to hold the Primary Key fields.

There are two forms of the command, POSITION(file) and POSITION(key). Traditional SQL drivers do not support POSITION(file) since most SQL engines do not support any kind of record numbering. However the object based drivers treat this case as POSITION(PrimaryKey). So the POSITION(file) form is supported - it's the same as POSITION(PrimaryKey).

Since object based drivers require a primary key, and since primary keys are always unique, POSITION(AnyKey) is always equivalent to POSITION(PrimaryKey). In other words, POSITION always returns the position of the primary key. This implies that all forms of POSITION when called on a FILE, return the same value.

If prop:Project is set, and the read did not include the primary key fields, then POSITION will fail, returning an empty string. An error code 121 Invalid Project List will be set.

Note: Because the position string returned can contain null characters (chr(0)) the return value MUST be stored in a STRING, not a CSTRING.

Note: Although POSITION returns a common value, regardless of the key used, the key passed into RESET still matters. In traditional drivers this means that the Position(key) must match the Reset(key). In object based drivers POSITION returns a common value, so any key can be used in the RESET.

Note: This section specifically applies to using POSITION with a FILE. For use with a VIEW see here.
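For example, a sketch of saving a position and later restoring it (the table and key names are hypothetical; note the use of a STRING, not a CSTRING, as required above);

SavedPos  String(260)
  Code
Set(cus:NameKey)
Next(Customers)
SavedPos = Position(Customers)
! ... other processing here ...
Reset(cus:NameKey,SavedPos)
Next(Customers)                  ! continue the sequence from the saved position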

prop:Project

When reading a record from a file (using GET, NEXT or PREVIOUS), typically all the fields are read from the file into the record structure. In some cases though only a small number of fields are necessary. Reducing the number of fields fetched from the server can improve performance as less network traffic needs to flow over the wire. prop:Project is introduced by object based drivers to offer the programmer this level of control.

prop:Project is a comma separated list of field names. Either the Clarion Label (including the prefix) or the database column name can be used.

prop:Project is set in the usual way;

Tablename{prop:project} = 'cus:id, cus:name, cus:phone'

If the Primary Key fields are NOT included in the projected list then the read is automatically set into Read-Only mode, and the results read by the SET/NEXT loop cannot be PUT back into the database. Doing a PUT after a read-only read results in an Access Denied (5) error. The use of POSITION, and by implication REGET, is also unavailable if the primary key fields are not projected.
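For example, a sketch of this read-only behavior (the table, field and key names are hypothetical);

Customers{prop:Project} = 'cus:Name'   ! the primary key field is not projected
Set(cus:NameKey)
Next(Customers)
Put(Customers)                         ! fails; ErrorCode() = 5, Access Denied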

If a GROUP is projected, then all the fields inside the group are automatically projected.

If an OVER field is projected, then the over'd field is projected.

In addition to fields already in the table, it's possible to project calculated fields into local variables outside of the record structure using BindField.

Read

From a read perspective, prop:Project is supported by SET (which then affects the following NEXT and PREVIOUS calls) and GET commands. Notably REGET is not affected by prop:Project - REGET always reads the whole record into the buffer.

Write

From a write perspective, prop:Project is supported by the ADD, APPEND, COPY, MERGE, PUT and UPSERT commands. For ADD, APPEND, COPY and MERGE, when doing a partial write, the other fields are set to the (SQL side) default values, or null.

prop:Prefix

In traditional drivers prop:Prefix (aka prop:Pre) is a write-only property, used when constructing tables using the dynamic file driver.

In object based drivers it can be read and returns the PREFIX attribute of the table.

Primary Key

All tables in your dictionary should have a Primary Key. This is simply good database design, and is a requirement for most database engines. A Primary Key should be identified in the FILE structure by use of the PRIMARY attribute. This is typically set in the Dictionary Editor, for one of your keys. While the use of a Primary Key is not enforced by Clarion itself, it should be considered a basic requirement especially when using a SQL engine.

In the absence of a formally identified Primary Key, the object based drivers will treat the first key marked as Unique as the Primary key. However this approach may end up selecting a key that you do not expect. Since most engines fix the primary key when the table is CREATEd in the database this can have long-term consequences.

prop:Name

Unfortunately the Traditional driver behavior for prop:name is not consistent. Therefore the behavior of both traditional and object based drivers should be considered as driver dependent. And to make things more exciting, the values change once the FILE structure has been initialized[3] (on this thread, assuming the FILE is Threaded.)

The different known behaviors are documented here;

Before Initialization;

Type Driver Applies To Behavior Attributes
Traditional SQLite (maybe?) Field and Blob If the NAME is not set, then the LABEL (no prefix) is returned. Not included [1]
Traditional All Other SQL Field, Memo and Blob If the NAME is not set then a blank string is returned. Not Included [1]
Traditional All ISAM Field and Blob If the NAME is not set then a blank string is returned. Included
Object Based All Field and Blob If the NAME is not set then a blank string is returned. Not Included [2]

After Initialization;

Type Driver Applies To Behavior Attributes
Traditional CLARION Field (not memo) If the NAME is not set then a blank string is returned. If the name is set it is UPPER cased. Included (all spaces removed)
Traditional CLARION Memo If the NAME is not set then a blank string is returned. Included
Traditional All SQL Field and Blob If the NAME is not set, then the LABEL (no prefix) is returned. Not Included [1]
Traditional All ISAM Field, Memo and Blob If the NAME is not set then a blank string is returned. Included
Object Based All Field, Memo and Blob If the NAME is not set, then the LABEL (no prefix) is returned. Not Included [2]

Note 1: To get the attributes for Fields use the WHO command.

Note 2: To get the attributes for fields and blobs you can use prop:Attributes. For fields, WHO can be used.

Note 3: The file is initialized on a call to OPEN (successful or not.) The file remains initialized after a CLOSE, so this is not the same as "The file must be Open". Other commands may also trigger initialization, but if so, they are not known at this time.

Note 4: Bonus tip - when called as just File{prop:name}, the CLARION driver returns the full path name once the table has been opened, even if the table has since been closed. All the other ISAM drivers return the table name after it has been closed, not including the path.

[todo] discuss setprop:name, and the behavior in the new apps.

New Properties (and new uses for old properties)

Properties are a useful way to extend the functionality of the driver, without having to make use of the Object methods directly. For this reason a number of new properties are supported, and can be accessed using the regular property syntax. These properties only apply to object based drivers; if used against traditional drivers they are ignored.

Some of these properties exist already and are supported by the VIEW driver when applied to VIEWs. However traditional drivers may not support these properties when used in a FILE context.
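Since these properties are ignored by traditional drivers, code that must run against either kind of driver can test prop:OBD first. A sketch (Customers is a hypothetical table);

If Customers{prop:OBD}
  Customers{prop:Limit} = 100   ! only honored by object based drivers
End
Set(cus:NameKey)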

Property Driver Applies To Commands Description
prop:AddOnly All FILE ADD, APPEND When ADDing records to a database, the primary key position of the record is stored so that you can follow up the ADD with a DELETE or PUT if necessary. In cases where you are adding a lot of records, and you know you won't be doing a PUT or DELETE, you can set file{prop:AddOnly} = true. This bypasses the record tracking, which saves some time. This property is reset by any read (GET, NEXT, PREVIOUS, RESET, as well as CLEAR and CLEARPROPS).  If a PUT or DELETE is done after an ADD (with this property on) then a Record Not Available (33) error is raised. See also prop:ReadOnly below.
prop:Affected SQL FILE / VIEW DELETE, PUT When changing, or deleting rows, the number of rows affected by the write will be stored in prop:affected. This does not include rows updated internally by the server via cascading relational writes. See prop:ChildrenAffected.
prop:Arrays All FILE
Returns 1 if the driver supports arrays. Returns a blank string if not.
prop:Attributes All FILE File Structure Returns the Extended Name Attributes for a field or blob. Attributes are included in the Name of a field in the file structure, after a Pipe ( | ) character.
prop:Bytes All FILE
This behaves the same as the BYTES command, but is not limited to returning a LONG. Use this in cases where the returned value may be greater than a LONG can hold (ie greater than approximately 2.1 billion). See, for example, the DOS2 driver's support of files greater than 2 gigabytes in size. See also BYTES64.
prop:ChildrenAffected SQL FILE / VIEW DELETE, PUT Some databases return the number of related rows in other tables also affected by the write. This property may not be available in all drivers. Review driver documentation.
prop:ClearProps All SET / NEXT
Clears all properties mentioned in this list (unless otherwise noted.) Is equivalent to calling the CLEARPROPS command. Automatically called when doing a CLEAR(File).
prop:Database


Returns the DATABASE part of the current SQL connection string. See also prop:Server and prop:User.
prop:DatabaseName ALL FILE
Returns the database engine name. In other words something like SQLite, PostgreSQL, MySQL or whatever.
prop:Distinct SQL
FILE / VIEW SET Causes the result set to only contain distinct records. For more see Prop:Distinct.
prop:Explain SQL FILE / VIEW SET Gets feedback from the SQL database for a SQL command. For more see prop:Explain. Unlike the other properties this one is reset to false immediately after the SET command is called.
prop:FastPut SQL FILE / VIEW PUT Default value is false. If set to true, then a PUT statement will not reset read-only fields in the record buffer (from their corresponding values in the database.) This performs the same optimization for a PUT that APPEND does for ADD.
prop:Filter SQL FILE / VIEW COPY, MERGE, SET
Causes the result set to be filtered. In the FILE context it is especially useful to set the end of the SET/NEXT loop.
prop:Interface
All
FILE
All READs and WRITEs.
Allows custom field handling to be added to a table, without altering the shipping file driver. This is equivalent to calling SetCustomType. (Not cleared by ClearProps).
prop:GroupBy SQL FILE SET
prop:Having SQL FILE SET
prop:Json,FieldNumber ALL FILE
Returns the field as a JSON array. If the field is not an array, then the JSON is still a JSON array, just with a single element. Where necessary JSON encoding is used to encode JSON reserved characters. The field number is passed along with the property to determine which field is returned. Use negative field numbers to indicate Blobs and Memos.
somestring = Customers{prop:json,fieldnumber}
prop:Limit
SQL FILE / VIEW
COPY, MERGE, SET Allows a FILE or VIEW loop to be limited to a specific number of records. Can be used in a SET / NEXT loop. Setting this to -1 tells the driver to fetch all records. This property must be set to something for the Offset property to apply.
prop:NotDistinct
prop:OBD All FILE
Returns 1 if the file is using an Object Based Driver. Returns a blank string for traditional drivers.
prop:Object All FILE / KEY / BLOB / VIEW
As the name suggests, object based drivers are built using classes, and each FILE or VIEW structure gets an associated object assigned to it. It can be useful to access this object directly. This property returns a pointer to the DriverFileClass object. See Getting the Object for the FILE Structure for more. If used against a KEY or BLOB then the DriverFileClass object of the FILE is returned. (Not cleared by ClearProps.)
prop:Offset SQL FILE / VIEW COPY, MERGE, SET
Allows the initial records in the result set to be skipped. This property is only applied if the LIMIT property is set. The LIMIT property can be set to -1 to return all records.
prop:Order SQL FILE / VIEW SET Causes the result set to be ordered. In the FILE context it is especially useful for ad-hoc ordering of a SET/NEXT loop.
prop:OrderBy SQL FILE / VIEW SET
prop:Paging SQL FILE / VIEW SET When set to true (which is the default) then Files and Views fetch the records from the database one page at a time. In some cases though it's faster to fetch the whole result set in one request. Set prop:paging to false if this is the case.
prop:Pointer All FILE
Functions the same as the POINTER command, but is not limited to returning a LONG. Use this in cases where the returned value may be greater than a LONG can hold (ie greater than approximately 2.1 billion). See, for example, the DOS2 driver's support of files greater than 2 gigabytes in size. See also POINTER64. (Not cleared by ClearProps.)
prop:Pool SQL FILE OPEN Allows more control over the pooling of database connections. See Transactions.
prop:Project SQL FILE COPY, MERGE, SET Allows for partial record reads and writes.
prop:ReadOnly All FILE SET When reading records (like in a SET/NEXT loop) the driver needs to keep track of the record just loaded in case you want to do a PUT or DELETE on the record. This tracking takes up some time. To improve performance you can set file{prop:ReadOnly} = true, in which case this tracking is bypassed. If an ADD, APPEND, PUT or DELETE is done after a read (with this property on) then an Access Denied (5) error is raised. See also prop:AddOnly above.

Prop:ReadOnly is not reset to false except on an OPEN, CLOSE, CLEAR or CLEARPROPS. So if you set it for a file, then you are responsible for unsetting it if you switch to using the file in read/write mode.
prop:Records All FILE
Functions the same as the RECORDS command, but is not limited to returning a LONG. Use this in cases where the reply value may be greater than a LONG can hold (ie greater than approximately 2.1 billion). See also RECORDS64.
prop:RelationsOff SQL FILE SUSPENDRELATIONS, RESUMERELATIONS
If server side relationships have been suspended, for this connection, on this thread, then this property will return 1. Otherwise it will return a blank string.
prop:Server
Returns the SERVER part of the current SQL connection string. See also prop:User and prop:Database.
prop:SQLFilter SQL FILE / VIEW COPY, MERGE, SET
prop:SQLOrder SQL FILE / VIEW COPY, MERGE, SET
prop:User SQL FILE / VIEW
Returns the USER part of the current SQL connection string. See also prop:Server and prop:Database.
prop:Where SQL FILE / VIEW COPY, MERGE, SET This property is not set by the programmer, but is available to the programmer to inspect after a call to SET. This contains the combined WHERE clause as constructed by the class.
prop:WherePage SQL FILE / VIEW
This is the same as the prop:Where property, but includes the additional WHERE clauses used for the paging.
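
Several of these properties work together. As a sketch, prop:Limit and prop:Offset can be combined to page through a result set (the Customers table and Cust:NameKey are illustrative names);

ClearProps(Customers)
Customers{prop:Limit} = 50     ! page size; must be set for prop:Offset to apply
Customers{prop:Offset} = 50    ! skip the first 50 rows (ie start at page two)
Set(Cust:NameKey)
Loop
  Next(Customers)
  If Errorcode() then break.
  ! process the row
End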

PUT

Using traditional SQL drivers, only the changed fields are sent to the server. In other words, not all the fields in the record are updated. This is more efficient, and it also protects other fields, that may have been changed in the row, by other programs (or threads) between the GET and PUT calls. The object based SQL drivers mimic this behavior.

In addition to this, the object based, SQL drivers support prop:Project. When reading the record, prop:Project lets you limit the fields that are being read from the table. This property is used again in the PUT. So not all the fields are tested for changes, only the fields that are in prop:Project, which typically means the fields which were read in the first place.

In the context of a PUT, the projected fields are the fields being projected into the server from your program.

It is possible to adjust prop:Project between the read and the write, which in turn affords the programmer even greater control over which fields are written back to the database. If you add a field to prop:Project which was not read, then it will only be written if its current value differs from the value it held before the record was read. So adding fields to prop:Project should be done with care.

Some care should be taken when using GET, REGET, NEXT and PREVIOUS. All these commands also support prop:Project. Changing the value of prop:Project between the read and the PUT should be done with caution. Typically you will not want to write back more fields than were read (although it's possible to do so.) The drivers do not alter the value in prop:Project after a read.

Equally, prop:Project is not cleared, or reset, after a PUT. For subsequent reads the value should be cleared or reset or it will be used again.

By default, if supported by the driver, when a PUT is done, all server-side fields (identity, and readonly fields) in the record are reset to their current server-side value. Use the prop:FastPut property to suppress this behavior.
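
As a minimal sketch of the above (the table, key and field names are illustrative);

Customers{prop:Project} = 'Cus:Id,Cus:Name'  ! read (and later test) only these fields
Get(Customers,Cus:IdKey)
Cus:Name = 'Acme Trading'
Put(Customers)                               ! only changed fields in prop:Project are written
Customers{prop:Project} = ''                 ! not reset by PUT; clear it before the next read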

See also: Add.

prop:Readonly

When a table is opened, access rights for the table are set. Performance and security are improved if the table is opened in read-only mode. However since tables are opened generically, this is seldom used.

Even if the table is opened in the default read-write mode, there are times when telling the driver that a block of code will be read-only has performance implications. For this reason prop:ReadOnly has been introduced in object based drivers.

Set the property to true or false. For example, setting it to true;

customers{prop:ReadOnly} = true

has the effect of telling the object that it will only be used in a read-only context (for now.)  While in this state calling ADD or APPEND will result in an Access Denied (5) error. Calling PUT or DELETE will result in a Record Not Available (33) error. Once the read-only moment has passed the property can be set back to false.

The property is set automatically on OPEN or SHARE if the file is opened in read-only access mode. If the file is opened in read-only mode (via the OPEN or SHARE commands) then attempting to set the property to false has no effect. If it's desirable to change the access mode after the table has been opened then see REOPEN.
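
For example, a read-only reporting pass over a table might be bracketed like this (the table and key names are illustrative);

Customers{prop:ReadOnly} = true   ! skip the PUT / DELETE tracking for this block
Set(Cus:NameKey)
Loop
  Next(Customers)
  If Errorcode() then break.
  ! report on the row
End
Customers{prop:ReadOnly} = false  ! restore normal read/write behavior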

RECORDS

Records(FILE pFile)
Records(FILE pFile, String pFilter),BIGINT
Records(FILE pFile, *? pField1, <*? pField2>, <*? pField3>, ..., <*? pField15>),BIGINT

Parameter Type Description
pFile File The label of the file to count.
pFilter String A filter expression which expresses the records to count.
pField1..pField15 Any Fields in the FILE record. Up to 15 fields can be entered here. The count will be filtered using the current values in these fields. These cannot be OVER or ARRAY fields.

The RECORDS command has been extended to allow the counting of more things than just the records in the table. Two possible approaches can be used; a filter expression, or a list of 1 to 15 fields with fixed values.

Example

Return the number of records with a date in January.

r = Records(Invoice,'Inv:Date >= Date(1,1,2024) and Inv:Date <= Date(1,31,2024)')

or

r = Records(Invoice,'Month(Inv:Date) = 1 and Year(Inv:Date) = 2024')

Return the number of Invoices allocated to Customer 5 on the 12th of January

Inv:Date = Date(1,12,2024)
Inv:Customer = 5
r = Records(Invoice,Inv:Date,Inv:Customer)


Note: OVER and ARRAY fields cannot be used in either the filter or as pField parameters.

RECORDS64

Records64(FILE pFile),BIGINT

The RECORDS command returns a LONG, which limits the number to around 2.147 billion. This may not be enough - many databases support more than 2 billion rows in a table. The RECORDS64 command is a new command which returns a BIGINT value (at the time of writing, BIGINT equates to a REAL.)

If this command is called against a traditional driver then the regular RECORDS command is called, and a value less than 2.148 billion is returned. So this new command is backward compatible with existing drivers.
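
A minimal sketch (the Invoice table is an illustrative name);

count  BIGINT
  code
  count = Records64(Invoice)  ! correct even if the table holds more than 2.1 billion rows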

See also Bytes64 and Pointer64.

REGET

Reget(FILE pFile, String pPosition)
Reget(KEY pKey, String pPosition)

The REGET command reloads a row from the FILE based on the Position parameter. The Position is a string that has previously been populated with a call to the POSITION command.

The REGET command does NOT observe the prop:Project property. In other words it always loads all the fields in the record buffer. Calling REGET will clear the prop:Project property.

The REGET command does NOT support the length parameter. See DOS2 for more.

Traditional SQL drivers do not support Reget(File). Object based drivers treat this as Reget(PrimaryKey) so it is supported. Indeed since all calls to REGET (regardless of the key specified) are treated as Reget(PrimaryKey), they are effectively all equivalent.
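
The POSITION / REGET pattern looks something like this (the table, key and string size are illustrative);

pos  String(1024)                ! large enough to hold the saved position
  code
  pos = Position(Cus:NameKey)    ! save the current position
  ! ... other activity which disturbs the record buffer ...
  Reget(Customers,pos)           ! reloads all fields; prop:Project is cleared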

See also: RESET.

Relationships

In many aspects good SQL advice is limited to the specific context where that advice works. It is critical to understand your own context well in order to make good database design decisions. And perhaps no area in database design is as contentious as relationships. They are enormously useful in some cases, but equally unhelpful in others.

From a database perspective relationships allow the developer to define connections between tables. This is primarily used for two reasons;
  1. To maintain data integrity (ie preventing "bad" data from being added to the database), and
  2. To improve performance when updating primary key values, or deleting associated records.
For example, say there is an Invoices table, and each invoice contains multiple line items. If an invoice is deleted from the system, then it makes sense to cascade the delete to the child (LineItems) records. In some relationships this is desired. In others (for example, deleting a customer) you do not want to cascade the delete (deleting Invoices when deleting Customer would be bad.) And in some cases you would want to prevent a row (say the Customer) from being deleted if child records (Invoices) exist.

With traditional (and object based) drivers all of this can be done in the client program, and indeed Clarion is set up to support client-side data integrity in this way. But doing it on the client can be slow, especially if a lot of rows in different tables are updated. For example changing the Unique ID value for a Product might result in millions of LineItems rows needing to be updated. Object based drivers add the possibility of doing this work on the server, not the client.

Of course all magic comes at a price, and database relationships are no different. There are two areas where having a relationship comes at a cost;
  1. There's a performance penalty when adding or updating records. The fields in the relationship need to be checked, and naturally that comes at a cost.
  2. In distributed-data systems (think data on phones being synchronized to a SQL database) the order of synchronization suddenly becomes important. Parent tables have to be synchronized before child tables.
So choosing to use relationships, on the server side, is a question of balancing benefits in some cases with costs in others. There is no one right answer which will work in every situation.

Traditional drivers make no attempt to support server-side relationships, those are left to the developer to manage outside of the program. Object based drivers add a command ADDRELATION which allows the developer (or template) to inform the file driver of the desired relationships.

For some database drivers (most notably SQLite) this call should be made before the table is created. The relationship is baked into the table creation, and changing that afterwards can be "expensive". For most other databases, relationships can be created (and updated) on the fly.

Relationships consist of a parent table, a child table, action on update, action on delete, and the list of one or more linking field pairs. Up to 10 field pairs are supported.

Relationships make some table maintenance more difficult. For example, dropping a table is problematic if other tables are related to this table, and contain child data. The common technique of using REMOVE or EMPTY to empty a table, then repopulating it with some data fails because you may not be allowed to delete parent records, or worse, the delete cascades to the children (or the children are cleared.)

So in some cases, especially when table maintenance is being done, it's necessary to suspend and later on resume relationship checks. The new commands SUSPENDRELATIONS and RESUMERELATIONS have been added to give the programmer more control. Some databases do this globally, on a connection or session basis, or on a table by table basis. Be aware that if it's done globally then this could affect other threads in the program or other programs. Check your driver specific documentation to fully understand the implications of these commands. And, of course, use them only when necessary.
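
For example, a maintenance step might be bracketed like this (the table name is illustrative, and this sketch assumes SUSPENDRELATIONS takes the FILE label, matching RESUMERELATIONS);

SuspendRelations(LineItems)   ! relationship checks off; scope is driver specific
Empty(LineItems)
! ... repopulate the table ...
ResumeRelations(LineItems)    ! relationship checks on again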

REOPEN

Reopen(FILE pFile,LONG pNewMode)

If the file is already open, then it can be reopened with different access rights.
This is an alternative to calling CLOSE and then OPEN with a new access mode parameter. This bypasses all the clean-up that usually occurs when doing a CLOSE or OPEN. Use this to change the access mode, but without affecting other properties or record buffers that may be set.

If the reopen is changing to a read-only access mode, then prop:readonly is set to true. If the reopen is changing to a read/write access mode then prop:readonly is left unchanged.

If the file is currently closed, then REOPEN behaves like a normal OPEN, except that the File Name property is not updated (if the File Name property is a variable.)

Example

Reopen(Customers,42h)
Customers{prop:readonly} = false

 
If the REOPEN command is called against a traditional driver then the Errorcode() is set to NoDriverSupport (80).

REMOVE

REMOVE(FILE pFile)

The REMOVE command for object based drivers is the same as for traditional drivers. It deletes the table from the database (in SQL terms it does a DROP TABLE).

There is however an additional consideration when using REMOVE in a SQL context, where server-side relationships are active. If you are using server-side relationships then the use of REMOVE can be damaging. If called when relationships are active then the defined On Delete behavior may be executed by the database. This includes cascading deletes, clearing of child fields, and so on. Indeed the call may fail for one, or more, rows if there is a Restrict limitation placed on deleting the row. This may not be intended behavior.

If the call to REMOVE is part of a wider process (to drop the table, and then recreate it) then strong consideration should be given to suspending relational updates (using SUSPENDRELATIONS).

See also the EMPTY command which has similar implications, and the section on Server Side Relationships in general.

RENAME

RENAME(FILE pFile, STRING pNewFileName)

The RENAME command is not supported by traditional SQL drivers. However object based drivers allow the RENAME command to be implemented by the driver. The following is the expected behavior when used by a SQL driver;

This command renames the table in the database. Note that only the table may be renamed. The keys may be renamed by the driver, or they may not (depending on what the database requires.)

The table can be renamed if it is OPEN however note that the Full Path Name setting for the table is not changed. Thus subsequent operations on the table may fail if the Full Path Name is not updated at the same time. (Unless the next command is a CREATE.)

If the destination table already exists, then an error Access Denied (5) is generated.
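
A minimal sketch (the table and new name are illustrative); if the table will be used again after the rename, update the Full Path Name at the same time;

Rename(Customers,'customers_old')       ! renames the table in the database
Customers{prop:Name} = 'customers_old'  ! keep the Full Path Name in step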

RESULTGET

RESULTGET(FILE pFile, LONG pRow)
RESULTGET(FILE pFile, BIGINT pRow, LONG pColumn)
RESULTGET(FILE pFile, BIGINT pRow, LONG pColumn, *BYTE pNull)

RESULTGET is one of the functions that lets the program access the result set directly.
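
As a hedged sketch (assuming the return value is the contents of the specified cell, and that rows and columns are numbered from 1; the table and key names are illustrative);

Set(Cus:NameKey)
Next(Customers)                  ! populate the result set
s = ResultGet(Customers,1,2)     ! row 1, column 2 of the current result set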

RESUMERELATIONS

RESUMERELATIONS(FILE pFile)

This ends a matching SUSPENDRELATIONS command. Calling CLOSE also automatically triggers the end of the SUSPENDRELATIONS command. If SUSPENDRELATIONS has not been called then this does nothing. If SUSPENDRELATIONS has been called multiple times, then this resumes all of them. (In other words there is no counter counting Suspends and Resumes.)

Note that most SUSPENDRELATIONS commands apply to a connection, or the entire database (consult your driver documentation for specifics.) However to work correctly it is necessary to call RESUMERELATIONS for the same table as SUSPENDRELATIONS.

Calling RESUMERELATIONS without a preceding SUSPENDRELATIONS is meaningless and harmless. No Error will be generated.

See also SUSPENDRELATIONS and Relationships.

ROLLBACK

ROLLBACK(FILE pFile, STRING pSavePoint)
ROLLBACK(FILE pFile)


If LOGOUTCONNECTION is used to start a transaction, then ROLLBACK MUST be called with the primary table. Calling ROLLBACK with no parameters in this situation will cause a GPF.

For more information on rolling back to a save point, see SAVEPOINT

Saving State

The templates (both ABC and Clarion) generate FILE structures as Global, usually threaded, structures. This has a number of advantages, but also means that there can be side-effects when a procedure is using the FILE structure, and calls another procedure that uses it as well.

To help with this problem Clarion has three commands (GETSTATE, RESTORESTATE, FREESTATE) used to save the state of a FILE so that it can be restored later. This allows procedures (either the caller or the callee) to snapshot the state of the structure, and then return it to that state, thus ensuring that there are no side-effects.

For ISAM drivers this works well, and can be "perfect". However for SQL drivers there are limitations, precisely because another program is involved (the SQL server) and it does not necessarily have the ability to do the same. Fortunately these limitations seldom matter to the program.

It's also worth noting that saving a state is "expensive" in RAM terms. There's quite a lot of information (properties, the result set and so on) which is stored. When the state is no longer needed FREESTATE should be called. FREESTATE is not called automatically when CLOSE is called (because the state transcends the FILE open status.) The state will be freed when the thread ends, but freeing it as soon as it is no longer needed is recommended.
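
The pattern looks something like this (assuming GETSTATE returns a handle which is passed to the other two commands; the table name is illustrative);

state  LONG
  code
  state = GetState(Customers)    ! snapshot the buffers, properties and result set
  ! ... call code which also uses Customers ...
  RestoreState(Customers,state)  ! return the structure to the saved state
  FreeState(Customers,state)     ! release the (potentially large) saved state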

SAVEPOINT

SAVEPOINT(FILE pFile, STRING pSavePoint)

SAVEPOINT is a new command that can be used inside a transaction (ie between a LOGOUT and a COMMIT). It places a "marker" (the pSavePoint parameter) into the transaction. This allows a ROLLBACK just to that save point, rather than all the way back to the beginning of the transaction.

For Example (this code is obviously highly abbreviated);

Logout(0,Invoices,LineItems) ![1]
Loop
  Add(Invoices)
  SavePoint(Invoices,'Invoice' & Inv:id)
  Loop
    Add(LineItems)
    If Errorcode()
      Rollback(Invoices,'Invoice' & Inv:id)
      Break
    End
  End
End
Commit()


Note 1: For SQL drivers actually listing all the tables is obsolete. See Transactions for more information on that.

SEARCH

SEARCH(FILE pFile, STRING pSearchString)
SEARCH(VIEW pView, STRING pSearchString)

Object based drivers introduce ranked Full Text Search features to the driver. This is done with the SEARCH command. This command is effectively a complex filter, and is applied between the SET command and the NEXT and PREVIOUS commands.

Result sets are ordered by rank - in other words the most relevant results are returned first.

Note that Full Text Search is not like INSTRING or LIKE. It's not matching a string of characters, but rather one or more whole words. Searching for the word data does not return a match for BackupData or DataManager - only for columns that contain the whole word data. The search is not case sensitive.

By default the full text search is created across all the string, pstring, cstring, memo and blob fields in the table structure. Fields marked as BINARY, UUID or BASE64 are excluded. Fields with a Custom Field Interface are excluded.

The syntax for the search string can be simple, but it can also be made more specific using a well formed query. The following operators are supported;
Operator Description
OR , | The OR operator (or alternatively a pipe symbol can be used) returns a result set where either of the terms is present.
AND, +, & The AND operator (or alternatively a plus or ampersand symbol can be used) returns a result set where both of the terms are present.
NOT, -, !, ~ The NOT (or alternatively a minus, tilde or exclamation symbol) removes any row with this term from the result set.
"text" Double quotes are used to indicate phrases which need to be matched in the row.
( ) Parentheses are used to indicate operator precedence.

All of the above is a formal way to say that the search string behaves the way you would expect. Here are some examples;

Return result sets containing the word RED or the word BLUE.

'red | blue'
'red or blue'


File Example

Set(Pro:NameKey)                             ! [1]
Products{prop:filter} = 'Pro:Active = 1'     ! [2]
Search(Products,'NetTalk')
Loop
  Next(Products)
  If Errorcode() then break.
  ! whatever
End


[1] : The result set will be ordered by relevance (known as the rank) first, and then the sort order specified by the SET will be applied after that.
[2] : The SEARCH is in addition to, not in place of, all the regular SET behaviors. So all the other properties that apply to the SET are still in play.

View Example

Find all the customers with the word INC in the record. The most relevant records are returned first.

Set(CustomersView)    
Search(CustomersView,'inc') 
Loop
  Next(CustomersView)
  If Errorcode() then break.
  ! whatever
End

Field Types

Fields in the row that are searched have to be text, not binary fields.

If you have a BLOB or MEMO in your dictionary, and that field is not marked as BINARY, then the driver will create it as a TEXT type field (or similar) when the table is created in the database. If the field is declared as a TEXT (or similar) then it can be searched using full text search.

BLOB (or STRING, PSTRING) fields that are marked as BINARY in the dictionary, and therefore created as a BLOB (or similar) in the database cannot be used in a full text search.

SEND

SEND(FILE pFile, STRING pCommandList)

The SEND command is used to send instructions, and set values to the file driver which control the way the driver behaves. For more information see Driver Options.

The command list is a space-separated list of commands, which may, or may not, need associated values.

If the SEND command is used without specifying a value, then the current value of the setting is returned and the value is unchanged. If a value is passed then the value after (attempting) to set the value is returned. Some databases will not accept a setting, so the setting, as used by the database, is returned. (In other words, if the value returned does not equal the value passed, then the option failed to set for some reason.)

The SEND command can accept a space-separated list of settings. If multiple settings are set at the same time then only the value of the last setting is returned.

Note: Because it accepts a space separated list care should be taken not to include extra spaces, such as before or after the = sign. If spaces in the name or value are necessary, then enclose the term in double quotes. For example;

Send(Customers,'/pragma_foreign_keys=ON')      ! good
Send(Customers,'/pragma_foreign_keys = ON')    ! bad
Send(Customers,'"/pragma_foreign_keys = ON"')  ! good

SETCUSTOMTYPE

[...]

SET, RESET, POSITION

SET(FILE pFile)
SET(FILE pFile, KEY pKey)
SET(FILE pFile, LONG pOffset)
SET(KEY pKey)
SET(KEY pKey, LONG pOffset)
SET(KEY pKey, KEY pKey)
SET(KEY pKey, KEY pKey, LONG pOffset)

The SET command in Clarion comes in seven different flavors. Fundamentally it breaks down into combinations of three parameters: a FILE, a KEY, and an OFFSET. For traditional drivers the OFFSET parameter is a LONG, and thus has a maximum value of 2.1 billion. If your OFFSET is, or may be, greater than this number then create a variable of type BIGINT, populate that with the larger number, and then call SET with that variable as the OFFSET parameter. Using this form ensures that the whole BIGINT value is passed to the driver. This is especially important when using the DOS2 driver on files larger than two gigabytes.

For example;

ptr  BIGINT
  code
  ptr = ! some number, perhaps > 2 gigs
  SET(Customers,Ptr)


The traditional SQL Drivers do not support all seven forms of the SET command. Typically they only support SET(key) and SET(key,key). The other forms of the SET command, especially setting any kind of OFFSET are not supported. Object based drivers however do support all these forms.

Traditional drivers do not support prop:Order or prop:SQLOrder when doing a SET on a File, object based drivers do.

The following points explain the interactions, and limitations of the object based drivers new additional features;

Reset

RESET(FILE pFile, STRING pPosition)
RESET(KEY pKey, STRING pPosition)

Despite the name, it's best to think of RESET as just another form of the SET command. It is possible to call RESET without first calling SET. RESET simply adds another way of doing a SET, and in all respects behaves in exactly the same manner.

The initial values used by the RESET command are not taken from the record buffer (as they are with a call to SET) but rather from the Position parameter passed into the RESET command. The Position parameter is the result of an earlier call to POSITION.

See also REGET.

Position

POSITION(FILE pFile)
POSITION(KEY pKey)

Reset is based on a value returned by POSITION.

In traditional drivers, POSITION in turn is based on the contents of the record buffer when it is called, and importantly on a specific key when it is called. In other words a key is chosen when calling POSITION and that same key needs to be used when calling the RESET command.

For example;

GET(file,5)  ! it doesn't matter how the record is fetched
p = POSITION(file:PrimaryKey)
RESET(file:PrimaryKey,p) ! works
RESET(file:OtherKey,p)   ! fails


With SQL object based drivers, SET supports prop:Order and prop:SQLOrder. Since RESET should be considered a form of SET, it makes sense that POSITION should also support prop:Order and prop:SQLOrder[1]. However the only way to do this is to have POSITION return the "whole contents of the record buffer" (which is what it does.)

In other words, the length of the string returned by POSITION is now the size of the record, not a size determined by the Primary Key and selected key. While this requires more memory for the saved position, it has some beneficial side effects; POSITION and RESET are faster, and since the POSITION contains the whole position, the RESET can be done on any key, and is not limited to the key, or prop:Order, set at the time of the call to POSITION.

Note 1: prop:SQLOrder allows for calculated fields to be included into the ORDER BY clause. Since these clauses are outside the record buffer they cannot be captured by POSITION. Hence if calculated fields are used in the prop:SQLOrder property then the RESET position may not match the actual record position, and thus in this case RESET cannot be used.

SET / NEXT Loop

From the earliest days of Clarion, embed code (and hand-code generally) has used loops to walk through a FILE, in a pattern known as the SET / NEXT Loop.

The basic pattern looks something like this;

Clear(Invoice)
Set(Inv:DateKey)
Loop
  Next(Invoice)
  If Errorcode() then break.
  ! do something
End


This code is ubiquitous and works well with ISAM files, but is not very SQL friendly. A slightly expanded form of this code follows the same approach;

Clear(Invoice)
Inv:Date = Date(1,1,2024)
Set(Inv:DateKey,Inv:DateKey)
Loop
  Next(Invoice)
  If Errorcode() then break.
  If inv:Date > Date(1,31,2024) then break.
  ! do something
End


In the above code all the invoices for January are fetched. Again, this code is very fast in ISAM but very inefficient in SQL. When converting a program to SQL it is usually recommended to change this code to use a VIEW. However with object based drivers it is possible to make a small tweak to make this a lot more efficient for SQL back ends.

To understand why, it's important to understand that VIEWs have two major efficiency benefits over a SET / NEXT loop. Firstly they allow you to limit the number of fields populated with each iteration through the loop. If you are only going to use a few fields out of a lot of fields, then this is good to do. Secondly they allow the result set to be bounded by using a Filter.

Object based drivers add support for a number of old, and new, properties to the SET/NEXT loop which better informs the engine on the goals of the loop, and this results in code which is more efficient in the database.

All properties should be set (or cleared) before the call to SET. Unless otherwise specified properties are not cleared by the call to SET and must be manually cleared (if desired) before the next SET. Changing these properties during a loop is not recommended (except where documented below.)

To ensure that properties set by previous calls do not affect the SET / NEXT loop, start the loop by calling either Clear(File) or ClearProps(File).

Prop:ClearProps

Calling prop:clearprops sets all these properties back to their default value (null or false.) This is the same as calling ClearProps.

customers{prop:clearprops}

Prop:Explain

A SET command, along with all the properties described here, result in a SELECT statement being sent to the database.

If your loop runs slowly, it can be useful to see the way in which the database uses the statement, and specifically whether it is able to make use of keys on the table to best generate the result. prop:Explain can be set, before the call to SET, to get information from the database regarding the query.

Prop:Explain is cleared automatically by a call to SET. The SET still functions, and the code runs as normal. The output is dependent on the driver.
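
For example (how the feedback is surfaced is driver specific; the table and key names are illustrative);

Invoice{prop:Explain} = true   ! ask the database to explain the next query
Set(Inv:DateKey)               ! the SET runs as normal, and prop:Explain is reset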

For more information on this see Explain.

Prop:Filter

In the example code at the top of this section, a SELECT statement is generated and a Result Set is constructed by the server. Using the SET(key,key) line sets the starting point of the result set, but crucially there's no end point defined. So the server builds a result set of all the invoices since Jan 1, 2024 - whereas we are really only interested in the invoices actually in January. Letting the server in on that little bit of information allows the server to build a smaller result set.

Object based drivers allow for this by adding support for the prop:filter statement. So a small tweak to the code becomes;

Clear(Invoice)
Inv:Date = Date(1,1,2024)
Invoice{prop:filter} = 'Inv:Date <= Date(1,31,2024)'
Set(Inv:DateKey,Inv:DateKey)
Loop
  Next(Invoice)
  If Errorcode() then break.
  If inv:Date > Date(1,31,2024) then break. ! it doesn't matter if this line is here or not
  ! do something
End


This is a lot simpler, and quicker to do than refactoring the code completely to use a VIEW.

The code could also have been translated to this;

Clear(Invoice)
Invoice{prop:filter} = 'Inv:Date => Date(1,1,2024) AND Inv:Date <= Date(1,31,2024)'
Set(Inv:DateKey)
Loop
  Next(Invoice)
  If Errorcode() then break.
  ! do something
End


All the same rules that apply to View{prop:filter} apply here, and indeed File{prop:SQLFilter} is also supported as well.

If you use a SET(key,key) form of the SET statement, and also a Prop:Filter, then the Prop:Filter will be appended to the key filter. So if the key is on, say, Inv:Date and Inv:Time, and you set Prop:Filter = 'Inv:Total > 0', then the result is filtered by Date, Time and Total.

If you use the SET(file) or Set(key) form of the SET statement then the prop:filter will be used alone (as there are no key fields.)

Various Clarion and SQL functions can be used in a filter. See View Filter Functions for more.

Prop:Limit

Another way to limit the result set generated by the SET / NEXT loop is to use the Prop:Limit setting. This is useful in those cases where the number of records you want is limited. For example, here's code to get the top ten invoices for the year;

Clear(Invoice)
Num = 0
Clear(Inv:Total,1)
Set(Inv:TotalKey,Inv:TotalKey) ! Total Key is descending highest to lowest
Loop
  Next(Invoice)
  If Errorcode() then break.
  If Inv:Date < Date(1,1,2024) or Inv:Date > Date(12,31,2024) then cycle.
  Num += 1
  If Num > 10 then break.
  ! do something 
End


Clearly we can use prop:Filter to optimize this, but prop:Limit will also be helpful;

ClearProps(Invoice)
Invoice{prop:filter} = 'Inv:Date => Date(1,1,2024) AND Inv:Date <= Date(12,31,2024)'
Invoice{prop:Limit} = 10
Set(Inv:TotalKey,Inv:TotalKey)
Loop
  Next(Invoice)
  If Errorcode() then break.
  ! do something 
End


Prop:Limit is used to limit the result set to a specific number of records. Use of Prop:Limit implies the same behavior as prop:Paging = false (the whole result is loaded into RAM in one go.)

Prop:Limit works for FILEs and VIEWs. It must be set (or cleared) before the call to SET. It is not reset automatically (except on an OPEN, CLOSE, CLEAR or CLEARPROPS, where it is reset to zero.) Changing the value of this property to non-zero while inside a SET / NEXT loop will have the effect of terminating the SET / NEXT loop at the end of the current page. Changing the value of this property to zero inside the LOOP has no effect.

Prop:Offset

If you wish to skip a number of records at the start of the result set, then you can use prop:offset to do so. All the FilePointer forms of SET also implicitly set the prop:offset to the passed-in FilePointer value.
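For example, a sketch that skips the first 100 invoices in the result set, and then fetches at most 50 (the table and key are from the earlier examples):

ClearProps(Invoice)
Invoice{prop:Offset} = 100   ! skip the first 100 rows of the result set
Invoice{prop:Limit}  = 50    ! then fetch at most 50 rows
Set(Inv:DateKey)
Loop
  Next(Invoice)
  If Errorcode() then break.
  ! do something
End
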

Prop:Order

In the example at the top there was helpfully a key (TotalKey) which was on the field we wanted to sort by, and also helpfully it was a descending key. In real life we probably aren't that lucky. In which case you have likely already converted this code to use a VIEW (or worse, you are loading the whole thing into a Queue and then sorting the Queue.)

Fortunately though object based drivers support Prop:Order. So you can use an ad-hoc ordering scheme where you don't have a key.

ClearProps(Invoice)
Invoice{prop:filter} = 'Inv:Date => Date(1,1,2024) AND Inv:Date <= Date(1,31,2024)'
Invoice{prop:Limit} = 10
Invoice{prop:Order} = '-Inv:Total'
Set(Invoice)
Loop
  Next(Invoice)
  If Errorcode() then break.
  ! do something 
End


If you use a SET(key) form of the SET statement, and also a Prop:Order, then the Prop:Order will be appended to the Key Order. So if the key is on, say, Inv:Date and Inv:Time, and you set Prop:Order = 'Inv:Total', then the result is ordered by Date, Time and Total.

If you use the SET(file) form of the SET statement then the prop:order will be used first (as there are no key fields.)

Internally a unique order is required. If necessary the engine will add the primary key fields to the end of the order, to make it unique.

Prop:Order (and prop:SQLOrder) work for FILEs and VIEWs. They must be set (or cleared) before the call to SET. They are not reset automatically (except on an OPEN or CLOSE, where they are reset to a blank string.) Changing the value of this property while inside a SET / NEXT loop will have no effect on the loop.

prop:Paging

Traditional drivers make use of a SQL Cursor to build a result set in the database. The driver can then access that cursor to easily move forward and backwards in the result set, update records and so on. This is a simple and effective approach.

There are however disadvantages with this approach. It requires the server to build the whole result set on the server, even if only a small number of records will actually be retrieved. For example, a Browse on a Customers table will trigger a (potentially large) cursor on the back end, even if the user only views the first page of results. And since that user may sit on that browse for some time, the whole cursor is maintained in the database for that time, consuming server resources.

Object based drivers take a different approach. Result sets are constructed as "page requests". So a limited number of rows are requested from the server, these are fetched, and then the server is free to carry on as normal. After the request is made (and fulfilled) no resources are consumed on the server.

However all magic comes at a price. There is some overhead in both creating these pages, and in calculating the next page to fetch. If the result set is known to be "small" (ie will fit into client-side memory) then prop:Paging can be set to false. This allows the engine to bypass some code and so slightly improve performance. It also results in one less query to the database (since there's no "empty last page" to fetch.)

Prop:Paging works for FILE and VIEW structures, and is set to true by default. It must be set (or cleared) before the call to SET. It is not reset automatically (except on an OPEN , CLOSE , CLEAR, CLEARPROPS, where it is reset to true.) Changing the value of this property to true or false while inside a SET / NEXT loop has no effect.
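For example, where the result set is known to be small, a sketch like this bypasses the paging overhead and saves one round trip to the server:

ClearProps(Invoice)
Invoice{prop:filter} = 'Inv:Date => Date(1,1,2024) AND Inv:Date <= Date(1,31,2024)'
Invoice{prop:Paging} = false  ! small result set; load it in a single request
Set(Inv:DateKey)
Loop
  Next(Invoice)
  If Errorcode() then break.
  ! do something
End
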

Prop:Project

See prop:Project

Prop:SqlFilter

Prop:SQLFilter is supported.

Prop:SqlOrder

Prop:SQLOrder is also supported. However note that in the FILE context, there is no need to (and you should not) use a SQL Alias (or table name) when identifying the field(s). This is different to the traditional drivers which require the use of the SQL Alias. The use of SQL Aliases in the property is allowed with object based drivers, and it will work, but this technique is brittle.

There is a catch though. Prop:SQLOrder allows for the result set to be sorted by a calculated value. For example;

ORDER BY YEAR(CURDATE()) - YEAR(birthdate) ASC;

However it should be noted that since these values are opaque to the driver, it's not possible for the driver to reliably "page" through the file when this property is used. For this reason, when making use of the Prop:SqlOrder field, the entire result set is returned. Consider using prop:Limit in cases like this to put some upper bound on the number of records returned. This is only an issue when using a calculated value in the order.
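As a sketch, ordering by a calculated age (the Employees table and birthdate column here are hypothetical, and the SQL function syntax will vary by database), with prop:Limit bounding the unpaged result:

ClearProps(Employees)
Employees{prop:SQLOrder} = 'YEAR(CURDATE()) - YEAR(birthdate) ASC'  ! calculated value; result is not paged
Employees{prop:Limit} = 100   ! bound the size of the single result set
Set(Employees)
Loop
  Next(Employees)
  If Errorcode() then break.
  ! do something
End
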

Prop:ReadOnly

When a record is read from the database, some work is done to ensure that a subsequent PUT or DELETE command will be able to update that same record in the database. If prop:ReadOnly is set then this work can be avoided.

Since this property is used at the Table level, it is not cleared by prop:Clear, nor is it reset in any way. So if you use this to improve performance in a SET / NEXT context you must clear it again after the loop is complete.

If you attempt to do a DELETE or PUT on a row loaded when this property was set then a Record Not Available (33) error is returned.
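For example, a read-only reporting loop might be sketched like this; note that the property is cleared manually afterwards:

Customers{prop:ReadOnly} = true   ! rows fetched in this loop cannot be PUT or DELETEd
Set(Cus:NameKey)
Loop
  Next(Customers)
  If Errorcode() then break.
  ! read-only work here
End
Customers{prop:ReadOnly} = false  ! not reset automatically; clear it when done
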

NOMEMO

Doing a SET/NEXT loop fetches all the fields in the table. If the table includes MEMO and/or BLOB fields, and you are not planning to use those fields in your loop, then you can improve performance by using the NOMEMO command. Traditional SQL drivers do not support NOMEMO, but object based drivers do.

If you do a PUT inside the loop, and NOMEMO is active, then the MEMO and/or BLOB fields are not updated.

NOMEMO is automatically cleared by a call to GET or REGET. It is also cleared by a call to SET, however it will remain "active" for the duration of the SET / NEXT loop. Changing this property inside the loop has no effect on the results returned by the loop.
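For example, a sketch of a loop that skips memo and blob retrieval:

NoMemo(Customers)          ! the next SET / NEXT loop will not fetch MEMO or BLOB fields
Set(Cus:NameKey)
Loop
  Next(Customers)
  If Errorcode() then break.
  ! memo / blob fields are not populated here
End
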

Subsets

The easiest way to think of a subset is as a complex filter. A filter (prop:Filter) is great at getting a subset of records into a result set. Filters work well when the decision of whether to include a row or not can be determined simply by looking at the row. So to get all the records in a specific date range, you need look no further than the date column.

There are a group of queries though that need to go deeper. Once the condition lies outside the row itself, then subsets can become useful. SQL allows multiple results sets to be created, and then those results combined (in various ways) to make a single result set. The four operations are UNION (and related UNIONALL), EXCEPT and INTERSECT.

Command     Description
EXCEPT      Subtracts sets (A - B).
INTERSECT   Finds common elements (A ∩ B) (removing duplicates).
UNION       Combines sets (A + B) (removing duplicates).
UNIONALL    Combines sets (A + B) (keeps duplicates).
If you know your data is unlikely (or unable) to contain duplicates then UNIONALL is somewhat faster on the server because the server then does not check for duplicates.

Each operation produces a result set, and the next operation acts on that set. So it's possible to start with a result set (UNION command), then add another set (UNION), remove two sets (EXCEPT, EXCEPT), and follow that up with another UNION and an INTERSECT.

A subset contains just enough fields to identify the rows that are included (or excluded.) In practice this mostly means that the primary key fields of the primary table are used. (However some other combination of fields in the primary table could be used). Each command in the subset (UNION, EXCEPT, INTERSECT, UNIONALL) MUST return the same set of fields (same number, same type). The field names might be different, but they are ultimately affecting a single list, so they must match.

For example, say there is a list of customers. The goal is to find out which good customers (customers who purchased something in 2022 and 2023) did not purchase anything in 2024.
To do this the program has to look at the customer list as a whole, including customers who purchased in 2022, and 2023, but excluding those who purchased in 2024.

Here's another example; You have a list of employees and need to know which employees took no sick leave this year. In other words, take the list of employees, and remove from that list any employee that took sick leave.

A third example involves just the primary table. Each employee has a manager (which is denoted by the ManagerId field). Each employee has a role (for some it's "Manager") and each employee has a location (for some it's "New Jersey".) The goal is to find all the managers who do not manage someone located in New Jersey.

All three of these examples will be fleshed out below.

A Subset starts with a usual SET command on the table of interest (this is known as the Primary Table). In the first example it was customers, in the second and third it was employees.

The subset is created using the (explicit, or implicit) SUBSET command. This command indicates the key identifiers of the parent table which constitute the list. In the first example this would be the CustomerId, in the second and third the EmployeeId. This command might be implicit, meaning that you don't need it at all. If it's not used then the primary key fields of the primary table are assumed. In that case the subset is created by using one of the commands.

Then the subset of identifiers can be built using any number (and combination) of the UNION, UNIONALL, EXCEPT and INTERSECT commands. UNION adds rows, EXCEPT removes them, and INTERSECT keeps only the rows common to both sets. All these commands only add or remove unique values. If you wish to add non-unique values then you can use UNIONALL.

Example 1

Which customers purchased something in 2022 and 2023, but did not purchase anything in 2024?

Set(Cus:NameKey)
Subset(Customers,Cus:Id)             ! Optional since the Cus:Id is the Primary Key Field.
Subset(Orders,Ord:CustomerId)       
Union(Orders,'Year(Ord:date) = 2022')
Intersect(Orders,'Year(Ord:date) = 2023')
Except(Orders,'Year(Ord:date) = 2024')
Loop
  Next(Customers)
  ! etc
End


Let's work through this example;
Firstly the SET command behaves as it always does. It sets the starting point, sort order, filter, project list and so on. That part is unrelated to the subset part that follows.

The SUBSET command binds a table, and identifying columns, to the commands that follow. In this example two tables are involved. The first is the Customers table (consider this to be the primary table). The field(s) are the identifying fields which are in play. If the SUBSET for the primary table is not included, then the primary key fields for the primary table are assumed. (In this example it is specifically included for clarity, but it could be omitted.)

The second SUBSET command adds the Orders table, and indicates the foreign key fields(s) which match the primary table. In this case Ord:CustomerId matches the Cus:Id field. Note that the fields don't need to be the same name, but they do need to be the same type. They also need to be the same number of fields, no more and no less. If the primary table has three fields in the primary key, then three fields are needed here as well. The SUBSET command allows for up to 10 fields to be added.

Aside: In this example the subset is working on a single foreign table, the Orders table. However this is not a limit - any number of foreign tables can be used when creating the subset. The SUBSET command for each table only has to be called once for each table regardless of the number of UNION, UNIONALL, EXCEPT and INTERSECT commands used.

The next three lines determine which records are added and removed from the subset. The UNION command adds customers that ordered something in the first year, the INTERSECT removes any that don't exist in the second year as well, and the EXCEPT removes any which exist in the third year.

The LOOP through the resulting records then continues as normal. Fundamentally all the subset does is act as a filter on the SET / NEXT loop.

Example 2

Which employees took no sick leave this year?

Set(Emp:NameKey)
Subset(Leave,Lea:EmpId)
Except(Leave,'Year(Lea:Date) = ' & Year(TODAY()) & ' AND Lea:Type = ''Sick''')


Since the first filtering command is EXCEPT, and not UNION, the list starts with all employees included. (This would also be true if the first command was an INTERSECT; starting with an INTERSECT is equivalent to starting with a UNION of everyone, and then removing anyone not in the INTERSECT.)

This is in contrast to the earlier example which did start with UNION, and hence does not include anyone in the initial list.

So in this example the list starts with all employees, and then removes any who took sick leave this year.

Example 3

Find all the managers who do not manage someone located in New Jersey.

Set(Emp:NameKey)
Union(Employees,'emp:role = ''Manager''')
Subset(Employees,Emp:ManagerId)
Except(Employees,'emp:location = ''NJ''')


In this case the list starts with a UNION to get a list of the employees who are Managers.
Because it did not start with a SUBSET, the SUBSET is implied. Since the primary key field is Emp:Id, that is the implied SUBSET field on the primary table.

After the call to UNION there is a SUBSET command, which is on the primary table, but indicates a field other than the primary key field. This indicates that for the next commands, the Emp:ManagerId (which is the same type as Emp:Id, and ultimately contains an Emp:Id value) is the subset field. (In this case each employee has an Emp:ManagerId field indicating their manager.)
So, from the original list of managers, any manager who manages an employee located in New Jersey is removed.

Limitations

There are some limitations to using subsets;

Firstly, all the tables involved need to be in the same database. At this stage cross-database subsets (and by implication cross-driver subsets) are not supported.

Secondly, if at all possible, either set prop:Paging to false for the request, or set prop:Limit as high as possible given the RAM constraints of the program. Doing subset queries is (relatively) expensive on the server side, so you do not want to run one and then only fetch a single page of the result (if the intention is to go on and fetch the following pages.)

Implied

Two statements in a Subset are implied in specific cases. These simplify the calling code;
  1. If no SUBSET call is made, on the Primary table, then the Primary Key fields are used. For example this line;

    SUBSET(Customers,Cus:Id) ! This line is not necessary

    is not necessary (if Customers is the primary file, and Cus:Id is the primary key field.)
  2. If the first command (UNION, UNIONALL, EXCEPT or INTERSECT) is a EXCEPT or INTERSECT then the subset starts with all the rows in the primary table. For example in this sequence

    UNION(Customers)  ! This line is not necessary
    EXCEPT(Customers,'Cus:Status = 4')


    it is not necessary to call the UNION command, that is implied.

    It should be noted that if the first command is UNION or UNIONALL (on any table, not just the primary table) then the inclusion of all the rows first is NOT implied.
Also implied is that a subset requires at least one command, otherwise it is not a subset. So calling SUBSET, and then not calling any of EXCEPT, INTERSECT, UNION or UNIONALL before calling the NEXT (or PREVIOUS) commands, simply results in a normal SET / NEXT loop. The SUBSET commands are effectively ignored.

SUBSET

SUBSET(FILE pFILE,? pParm1,<? pParm2>,...,<? pParm10>)

The SUBSET command binds a table, and one or more fields, to the current subset. If not called at the start of the subset, on the primary table, then the primary key fields of the primary table are assumed. If EXCEPT, UNION, UNIONALL, or INTERSECT are used to limit the set, on a different table to the primary table, then SUBSET must be called first to bind that table, and related fields, to the subset.

For more information on subsets see subsets.

SUSPENDRELATIONS

SuspendRelations(FILE pFile)

The SUSPENDRELATIONS command disables Foreign Key integrity checking for the table. This potentially cascades to other tables using this connection, or other threads in the program. Consult the driver documentation for specific information.

This suspension is terminated by a call to RESUMERELATIONS, or a call to CLOSE.

Use this command in cases where data in the table needs to be modified, but relational updates should not be used. For example when using REMOVE or EMPTY (where the intention is to put the existing data back) or where for some other reason it's necessary to temporarily suspend relationships.

This command is independent of any (or no) calls to ADDRELATION. It applies to the database, regardless of when, how, or by who, the relationships were created.

See also RESUMERELATIONS and Relationships.
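For example, a sketch of emptying a table and reloading it without triggering relational updates (how the saved rows are held, and added back, is up to the caller):

SuspendRelations(Orders)   ! disable Foreign Key checking for this table
Empty(Orders)
! ... Add() the previously saved rows back here ...
ResumeRelations(Orders)    ! ends the suspension (a CLOSE would also end it)
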

SQL

SQL(FILE pFile, STRING pSql,<? pParm1>,<? pParm2>,...,<? pParm20>)

A new command, SQL, has been introduced as a safer approach to calling embedded SQL statements. This replaces the need for the prop:Sql command, which is inherently vulnerable to SQL Injection attacks.

To understand the solution, it's best to first cover the problem. Consider this statement;

Products{prop:sql} = 'SELECT * from Products where name like ''%' & search & '%'';'

The problem here is that the statement is constructed from variables (in this case search) which may contain harmful text. For example if this came from a web site search bar, and contained the text a%'; drop all tables; then the second command (drop all tables;) would be executed as well. Of course this second command could be anything (none of it good.)  This is known as a SQL Injection attack.

Think of this SQL SELECT statement as a really tiny program. It's written out as text, so the database "compiles" it, and then executes it. Since the drop all tables; command is part of the program it gets compiled and executed.

The solution to this problem is to split the program part from the data part. The program part is compiled, and then that compiled program is passed, with the data, to the database for execution. Since the data part is not compiled, it is not executed, and is safe (regardless of what it contains.) This process is known as parameterization.

The prop:Sql command is incapable of performing parameterization because there is no way to separate the "program" part and the "data" part of the statement.

The object based drivers offer a new command, SQL, which takes in the program part, and data part(s) as separate parameters. In the program part, the parameters are represented as ?n[1] where n is the parameter number. The example above translated to the SQL command is as follows;

SQL(Products,'Select * from Products where name like ''%'' + ?1 + ''%'';' , search)

Here's another example using two parameters;

SQL(Products,'Select * from Products where name like ''%'' + ?1 + ''%'' and price >= ?2;' , search , minprice)

The SQL command supports up to 20 data values in the call.
If the same parameter is used more than once then the command can contain the same ?n value, for example;

SQL(Customers,'Select * from Customers where firstname like ''%'' + ?1 + ''%'' or lastname like ''%'' + ?1 + ''%'';' , search)

Note 1: The exact syntax of the SQL statement will vary from one database to another.

Note 2: The SQL statement can be called with just the SQL string (ie without additional parameters), and so can be used to execute any existing prop:Sql statement.

Note 3: The following is from the Clarion documentation on prop:Sql;

When you issue a SELECT statement using PROP:SQL, the selected fields must match the fields declared in the named file or view. In addition, if you use VIEW{Prop:SQL} to issue a SELECT statement, the fields in the SELECT must be ordered based on the field order in the file definition, not the PROJECT sequence.

In object based drivers no such care must be taken. The fields returned can be in any order, whether working on a FILE or a VIEW. The results are matched up to the appropriate fields by name. Indeed any result set can be retrieved, and processed, even if it does not access the file at all. See Result Sets for more on this.

SQL Reserved Words

In the beginning we had ISAM files, and life was good. We labeled the fields in the table as anything we liked. Since there was no server between the program and the data, there was no need for the concept of limiting the names of tables or columns.

Along came SQL databases, and along with those the SQL language. SQL has words which are used in the language, and so cannot be used as column, or table, names. Dealing with this can be simple, or complicated - it all depends on your context.

The first, and most important thing to understand is that the NAME and the LABEL of a field are different things. The label is the thing you use in your code. And the label can be (almost) anything you like. Code that works against ISAM tables can be the exact same code that works against SQL tables. SQL reserved words do not affect your code. Do not read this section in the context of the label - that is not applicable here - this section is the context of the Column name as stored in the SQL database.

In the dictionary this is represented by the External Name setting for the column. Again, this is independent of the label. If the external name setting is blank, then internally the name is set to be the (upper case version of the) label, not including the prefix.

There are two basic strategies that can be employed when you have a SQL reserved word as the name of a column. Which one you choose will depend on your context. Let's deal with the easy context first;

If the table is new to the database then simply changing the External Name (or indeed, if it's really new, the label) to a non-reserved word is the ideal. Figure out a different name for the column, and move on. In the long run this is the best option because it makes ad-hoc queries against the database (especially from non-Clarion programs) easier. If you can do this, do it.

If, for whatever reason, the name of the column in the database is fixed and cannot be changed, then the driver will wrap the word in quotes[1][3] when it is used in a SQL statement. You don't need to do anything, this happens automatically. The only time it becomes a problem is when you start using the column name in properties, SQL statements and so on. And of course when making ad-hoc queries from other programs.

But wait, there's more. When you wrap the name in quotes like this, it (often but not always) becomes case sensitive. In other words "User" is different to "user" is different to "USER". Again this isn't a train-smash, it just makes using the name in ad-hoc SQL more precise.

If you have a reserved word as a label in your dictionary, and you do not set an external name, then the driver has to choose a case for the word. All the drivers (traditional, and object based) choose to make the word uppercase in this situation. If you want to keep the name of the column as-is, but at the same time make it not-upper-case, then you can set the external name in the dictionary (with or without quotes [2]) and the case of that setting will be maintained.

If after reading all of this, your conclusion is to avoid reserved words as column names (either via the label, or via the external name) then you have come to the right conclusion. If however it's too late for that, then welcome to a pretty large club.

Note 1: Most databases use double quotes but not all. Some use square brackets, or back ticks, or whatever. This is mostly handled for you by the driver so not something you need to worry about.

Note 2: Traditional drivers require that when reserved words are used in External Names that the necessary quotes, or brackets, or whatever are included. Object Based drivers do not require this, and will use the appropriate delimiter when necessary. Object based drivers don't mind the quotes or brackets being added in the dictionary, but leaving them off allows the driver to use the appropriate one as needed.

Note 3: The traditional SQLite driver has a bug - it can CREATE a table using quotes around the external name, but it can't OPEN a table that has an external name with quotes. So with the traditional SQLite driver it's necessary to set the External name to something that's not reserved. 

prop:SQLFilter / prop:SQLOrder

prop:SQLFilter and prop:SQLOrder allow you to inject absolute SQL code into the statement that is being sent to the server. The code is sent as-is, and as such is not parsed, processed, optimized or altered in any way. This is both a strength and a weakness.

The strength is that it gives you very explicit control over these  clauses in the SELECT statement. They can either override, or append, the regular clauses.

The weaknesses though are significant, and so their use should be both sparing and careful. You should use these only when all other options have been exhausted.

STREAM

While traditional SQL drivers did not implement STREAM (or FLUSH), the object based drivers do implement them to alert the database that a batch of writes is about to occur. This allows the database to "wrap" the writes in a block to improve performance. When upgrading programs that originated in ISAM files (like Topspeed), which used STREAM and FLUSH for performance gains, those performance gains are now present in SQL as well.

Exactly how this wrapping occurs (if it occurs at all) is up to the individual driver. Each database has different mechanisms for speeding up bulk-write performance.

Unlike ISAM drivers, in SQL STREAM does not generally lock the table, however this is database dependent, so consult your driver documentation to see if it has any locking effect.

You cannot call STREAM while inside a transaction. Doing so will result in the error code being set to 56 Logout Already Active.

For more information see the Performance section of this document.

Note: In practice, for most drivers, it's likely that calling STREAM starts a transaction. Since transactions are based on the connection (not the file) this also implies that any other files on the same connection will also effectively be part of the STREAM. However STREAM may do more than just start a transaction, so for bulk writes it is recommended to use it.
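For example, a bulk-write sketch; the driver is free to wrap the writes between STREAM and FLUSH in a transaction (or whatever mechanism suits the database):

Stream(Orders)             ! signal the start of a batch of writes (not inside a transaction)
Loop x = 1 to 1000
  ! populate the Orders record buffer here
  Add(Orders)
End
Flush(Orders)              ! signal the end of the batch
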

prop:Text

From the Clarion Documentation;

Prop:Text is an array property of a FILE that sets or returns the specified MEMO or BLOB field's data. MEMO and BLOB controls are negatively numbered, therefore the array element number must be a negative value. MEMO and BLOB declarations begin with -1 and decrement by 1 for each subsequent MEMO and BLOB, in the order in which they appear within the FILE structure.

In other words, prop:Text allows access to set or get the value of a field, based on the field number in the structure, and not based on the label.

Traditional drivers do not support prop:Text on fields other than memos and blobs. If used on any other field type it returns a blank string.

Object based drivers extend the support for prop:Text to all field types. It returns the text version of the field from the current record buffer, as a string. (See prop:Value if you want the raw memory value). Behavior for memos and blobs is unchanged.

If the field is an array then the "whole array" is returned, as a single string value (usually null padded).

For Example;

s = customers{prop:Text,3}  ! gets the text version of the 3rd field in the Customers File
s = customers{prop:Text,-1} ! gets the text version of the first memo or blob
customers{prop:Text,-1} = 'I saw a bear in the wood' ! Sets the text of the first memo or blob
customers{prop:Text,4} = 1234 ! sets the value in the fourth field to 1234.

Transactions

Clarion offers three commands which facilitate transactions; LOGOUT, ROLLBACK and COMMIT.

The LOGOUT command takes a Seconds parameter and a list of 1 or more FILE structures. The origins of this go back to the ISAM file drivers in Clarion's early days.

The first, and most important thing to understand when dealing with transactions in SQL is to understand that in SQL transactions do not work across a subset of tables. Rather the transaction belongs to a connection, and everything on that connection (and only on that connection[1]) is included in the transaction - whether the file is listed in the LOGOUT command or not.

In object based drivers, connections are pooled on a THREAD basis - thus transactions include commands performed on the thread, but do not include commands performed on other threads[2]. Since only commands executed on the same connection can be part of the transaction, it follows that only commands to the same driver will be included in that transaction.

Object based drivers introduce the concept of Save Points. This is done with the SAVEPOINT command, and with an optional parameter to the ROLLBACK command. Savepoints allow a partial rollback, back to a known good location. None of the transaction is committed, but it allows part of the transaction to be abandoned, and then the transaction can continue.

Traditional drivers will not allow multiple transactions (on the same thread) to be active at the same time. This is self evident from the nature of the ROLLBACK and COMMIT commands - since they take no parameters it is not possible to distinguish between transactions, and hence only one transaction is allowed to be open.

Since the object based SQL drivers make use of server-side transactions, where the transaction is connection based not table based, it is possible to allow multiple connections, and hence possible to allow for multiple transactions to be running at the same time. This is especially valuable if two different databases are being updated at the same time. Note though that these multiple transactions are not related. It's possible for one to commit, and another to rollback. It should not be assumed that all the open transactions end in the same way.

In order to support this behavior, a new command, LOGOUTCONNECTION, is used. A single table is passed into this command - this is referred to as the Primary Table of the transaction. As above, ALL tables on this connection automatically become part of the transaction.

Equally, if a transaction is started with this command, then the same primary table MUST be passed to the subsequent ROLLBACK or COMMIT command. Failure to pass this parameter will result in a GPF.

If a LOGOUT command fails and returns an error code, it is no longer safe to then call ROLLBACK or COMMIT. Calling either will result in a GPF.

  Open(Customers,42h)      ! this will open on the (default) connection pool 0
  Products{prop:pool} = 1  ! this will open on connection pool 1, not connection 0
  Open(Products,42h)
  LogoutConnection(Customers)             
  LogoutConnection(Products)             
  ! some writes happen here
  Commit(Customers)        ! if Using LogoutConnection you MUST use Rollback(file) or Commit(file)
  Rollback(Products)       ! if Using LogoutConnection you MUST use Rollback(file) or Commit(file)
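
Since calling ROLLBACK or COMMIT after a failed LOGOUT results in a GPF, it is worth checking the error code before doing any transactional work. A minimal sketch;

LogoutConnection(Customers)
If Errorcode()
  ! the transaction did not start - do NOT call Rollback or Commit
  Message('Transaction failed to start: ' & Error())
  Return
End
! some writes happen here
Commit(Customers)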



Note 1: Connections are pooled based on thread number, and owner string. If tables have the same driver, and the same owner string, then typically they share a connection. In some circumstances you may want to separate tables onto different connections. You can do this by setting prop:Pool to a number other than 0. The number doesn't mean anything - but tables with different prop:Pool values will not share a connection. Conversely tables with the same prop:Pool value will share a connection (if they're on the same thread, and with the same owner string.)
For example;

glo:owner = 'some database connection string'
Customers{prop:pool} = 1234
Open(Customers,42h)

Products{prop:pool} = 5678
Open(Products,42h)


In the above code, the Customers and Products table will not share a connection. By implication a LOGOUT on one will not include the other in a transaction.

Note 2: This assumes the FILE is THREADed. If the FILE structure is not THREADed then

UNION

Union(FILE pFile)
Union(FILE pFile,STRING pFilter)

The UNION command is used when creating Subsets. It allows results to be added to the current result set (not including duplicates).
It typically follows a call to SUBSET (although it does not have to).

Example

Find all employees who took study leave, or who participate in the bursary program.

Set(Emp:NameKey)
Subset(Leave,lea:EmployeeId)
Union(Leave,'Type = ''study''')
Subset(Bursary,bur:EmployeeId)
Union(Bursary)



Other related commands are EXCEPT, UNIONALL and INTERSECT.

UNIONALL

UnionAll(FILE pFile)
UnionAll(FILE pFile,STRING pFilter)

The UNIONALL command is used when creating Subsets. It allows results to be added to the current result set, but includes a duplicate if the record is already there.
It is a little faster than UNION, so it can be used in cases where the result is known to be unique. It can also be used in cases where duplicates are explicitly desired.
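For example, mirroring the UNION example above (a sketch using the same hypothetical tables); here an employee who took study leave and is also in the bursary program would appear twice in the result set;

Set(Emp:NameKey)
Subset(Leave,lea:EmployeeId)
UnionAll(Leave,'Type = ''study''')
Subset(Bursary,bur:EmployeeId)
UnionAll(Bursary)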

Other related commands are EXCEPT, UNION and INTERSECT.

UPDATE

Update(File pFile,String pFilter,?* pField1,String pValue1,...,?* pField10,String pValue10)
Update(Key pKey,?* pField1,String pValue1,...,?* pField10,String pValue10)

UPDATE is a new command, not present in Traditional drivers. It is similar to a PUT, except that the record can be updated without the need to read it first. Where the update does not require client-side code it is an efficient way of bulk updating records on the server. Multiple fields (up to 10) can be updated at the same time.

Parameter Type Description
pFile File The label of the file to update.
pFilter String A filter expression which expresses the records to update. If this is a blank string then NO records are updated. If ALL the records should be updated then this parameter should be set to driver:all.
pKey Key The set of rows to be updated is constructed from the key fields, and their current values. If this form is used then the pFile and pFilter parameters are not used. If the key is not unique then all the records matching the current key field values will be updated.
pField1..pField10 Any A field in the table.
pValue1..pValue10 String An expression containing the new value. This expression needs to be capable of being executed on the server. It can be a fixed value, or a calculation based on the current value, or on other values in the row.

Note that this value is considered to be an expression, and will be evaluated. So a string that contains, say, a math expression ('4+4+4') will be evaluated to 12. If you actually want to set the field to a literal string, the value should be quoted inside the expression so it is not evaluated.

Since the update is done completely on the server, the new value is either a constant value, or an expression which can be executed on the server.

This command is dangerous. Because it updates a number of records at the same time, it's very easy to get the expression wrong, and thus affect a large number of records with a single bad command. Some care should be taken to ensure that the command is correct before unleashing it on production databases. And, of course, good backups of the database are always recommended.

Fields in the file record are NOT updated with the values in the database.

Some examples;

Update a field for all rows in the table to a fixed value.

Update(Customers,driver:all, Cus:Notify, 0)

Update a field for a single record, without reading the record first.

Update(Customers,'Cus:Id = 4',Cus:Overdue,true)

OR

Cus:Id = 4
Update(Cus:IdKey,Cus:Overdue,true)


Update a field, based on other fields in the row.

Update(Invoices,'Inv:date < ' & today()-30 & ' AND Inv:Paid = 0',Inv:Overdue, true)

Split a field, populating it into multiple new fields.

Update(Event,driver:all,Eve:Month,'Month(Eve:Date)',Eve:Year,'Year(eve:Date)')



UPSERT

Upsert(File pFile)

UPSERT is a new command, not present in Traditional drivers. It is similar to an ADD, except that if the primary key value(s) already exist in the target table, then the existing record is updated, and no error is generated.

UPSERT only matches the Primary Key values. If other columns are unique, and the update would cause one of those other unique keys to generate a Creates Duplicate Key (40)  error, then that error will still occur.

For example this code;

Clear(file)
! set a bunch of fields here
Get(File,PrimaryKey)
If Errorcode()
  Add(File)
Else
  ! reset all the fields, except the primary key
  Put(File)
End


Is now replaced by;

Clear(File)
! set a bunch of fields here
Upsert(File) 

Validation

There's a lot to like about SQL, but generic advice for SQL is hard, because context matters enormously when it comes to making decisions. Validation is one of those areas where there's no universally right answer - the correct approach depends on your context.

Clarion templates are set up to do client-side validation. They check the data before sending it to the database. This can be helpful as it saves unnecessary calls to the database. It also allows the client to impose restrictions which may be program related, not necessarily database related.
(For example, the program may be limited to adding sales transactions, but not be able to add expense transactions.)

If the program "owns" the database, and all the programs adding data to the database are coded well, then this approach works well. It also saves work for the database to do (and in some cases this is important because the database CPU may be the constraining resource.)

However if the database is more open, or if other programs can write to the database (including general access programs like SSMS, pgAdmin and so on) then server-side validation may be a good thing to do as well. Traditional drivers do not support server-side validation, but object based drivers do.

In this context, to maintain data integrity, data validation is done on the server before data is written to the database. While the dictionary editor has options for Field Validation (on the Validity Checks tab for a field), these are not necessarily exhaustive possibilities from the SQL point of view. Additionally, these options are not part of the file structure, so extra code is needed to apply them to a FILE structure. These are therefore client-side validation rules.

Object based drivers allow for server-side validation rules to be placed in the Extended Name. This supplements (but does not replace) the Validity Checks tab (which generates Client-Side code), by allowing for Server-Side validation (in addition to client-side validation.) Possible options are;

Option Meaning
> n
>= n
< n
<= n
The value must be greater than, less than, or in a range of values. Two terms can be used.
Price | >  10 | < 1000
The above creates a range from 10 to 1000 (exclusive).
<> n
<>
The value must not equal n.
If n is omitted then a blank string (or 0 for numeric fields) is assumed.
BOOLEAN The value must be 1 or 0. If this is set, then the above < and > values (if they exist) are ignored. This can be used with, or without the NOTNULL option. Note that if the field is a string type then the value MUST still be one of '0' or '1'. 'True' and 'False' are not considered to be boolean values.
DEFAULT(n)
DEFAULT=n
When doing an ADD or APPEND, if the field is not supplied, then default the value to n. Do not use quotes around n.
INLIST(a,...,z) A comma separated list of allowed values. Do not use quotes on string values.
NOTNULL Null values are not allowed
REQ The value cannot be null, and cannot contain a 0 or blank. This is the same as NOTNULL | <> |
VALIDATE or
CHECK
A validation expression which is used as the SQL CHECK clause when creating the table in the database.
Validate(price % 100 = 0)

In this table n can be a number, or a string, depending on the type of the field. Quotes should NOT be used around string values.

Examples

Age        Long,Name('Age | > 0 | <= 130')
LastName   String(100),Name('LastName | <> ')
Paid       Byte,Name('Paid | Boolean ')
Score      Byte,Name('Score | Default(100) | NotNull')
Color      String(10),Name('Color | InList(Red,Blue,Green)')
Evaluation String(20),Name('Evaluation | Req')
Price      Long,Name('Price | Validate(price % 100 = 0)')

prop:Value

Traditional drivers do not support prop:Value on fields other than memos. If used on any other field type it returns a blank string.

Object based drivers extend the support for prop:Value to all field types. It returns the "raw memory" value of the field (not the text version of the field) from the current record buffer, as a string. See also prop:text . Behavior for memos and blobs is unchanged.

If the field is an array then the "whole array" is returned, as a single string value (usually null padded).

Note that Cstring and Pstring values are returned "full length" - the value is not truncated at the null terminator. The first char in a Pstring is the number of bytes the string uses, however the full string length is still returned.

For Example;

s = customers{prop:value,3}

Detecting an Object Based Driver

To determine if a table is using an object based driver, or not, at run time, use prop:OBD;

If table{prop:OBD}
  ! object based driver
End

Getting the Object for the FILE Structure

The first time the FILE is touched (on a thread), an object for that file is created. Assuming the FILE is declared with ,THREAD then each thread will have its own instance of the File Record, and each thread will have its own instance of the Object. In most situations you will not want to access the object directly - that's what the File Driver commands are for. However if the driver class has extra methods, or perhaps extends what a method returns, then you may want to call the object directly. This is exactly the same as calling the object via the File Driver commands, but can be desirable in some situations.

For example, the DOS 2 Driver supports files larger than 2 (or 4) gigabytes. Since the regular BYTES command returns a LONG, to get the actual size of the file you need to bypass the BYTES command. Since the BYTES method returns a BIGINT (Currently a REAL) if you call the method directly you can get a larger number returned.

Doing this is very straightforward. In this example Import is the name of the FILE structure.

ImportTable  &DriverFileDos2Class
r  real
  code
  ImportTable &= Import{prop:object}
  r = ImportTable.Bytes()


Note that threaded FILE structures, by extension, imply that each thread has its own object. So the object reference should be declared locally (as above) or, in the case of a global declaration, that declaration should be threaded as well. For example;

glo:ImportTable  &Dos2DriverClass,Thread

Note: This example is somewhat contrived. The BYTES64 command would work just as well.

prop:Object can also be used for keys and blobs. When used in this way it returns the DriverFileClass object for the FILE which contains that KEY or BLOB.

Example

SomeProcedure   (Key pKey)
somefile  &FILE
  code
 
somefile &= pKey{prop:object} ! somefile now points to the FileDriver object for the table that belongs to that key.

Views

BINDFIELD

BindField(VIEW pView,String pName, *Long pLong)
BindField(VIEW pView,String pName, *Real pReal)
BindField(VIEW pView,String pName, *String pString)
BindField(VIEW pView,String pName, *Cstring pCstring)


BINDFIELD works for VIEWs as well as FILEs. For more information see BindField.

CLEARPROPS

This new command resets all the properties associated with the VIEW back to the default values. It is automatically called when the VIEW is OPENed or CLOSEd.
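For example, the command can be called manually between uses of the same VIEW. This sketch assumes the command takes the VIEW label as its only parameter, and uses a hypothetical CustomerView;

Open(CustomerView)
CustomerView{prop:Distinct} = true
Set(CustomerView)
! ... Next() loop here ...
ClearProps(CustomerView)                   ! all properties back to their defaults
CustomerView{prop:Order} = 'Cus:LastName'  ! start fresh for the next pass
Set(CustomerView)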

DELETE(View)

When DELETE is called on a VIEW, the record in the Primary file is deleted. This bypasses any client-side code which does relational updates or deletes.

Of course, if there is server-side code (created by Relationships, or otherwise) then server-side actions will occur.
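For example, deleting selected rows while looping through a view (the view and field names here are illustrative);

Loop
  Next(InvoicesView)
  If Errorcode() Then Break.
  If Inv:Status = 'Void'
    Delete(InvoicesView)   ! deletes the row from the primary file, Invoices
  End
End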

JOINTABLE

JoinTable(VIEW pView, FILE pParentFile, KEY pKey, Byte pJoinType, <*? pField1>, ... <*? pField10>)
JoinTable(VIEW pView, FILE pParentFile, FILE pJoinFile, Byte pJoinType, String pExpression)


Parameter Type Description
pView View The View label of the View to attach the new Join to.
pParentFile File A FILE structure already in the view, that this join will join to.
pKey KEY A key in the table being added to the view as a Join. In other words, a key in the new FILE being joined into the View.
pJoinFile FILE The new FILE being joined into the View.
pJoinType Byte One of driver:LeftJoin, driver:InnerJoin, driver:RightJoin, driver:FullJoin, driver:CrossJoin, driver:SelfJoin.
pField1..pFieldn Any Fields in the parent table, which match fields in the pKey.
pExpression String An expression detailing which columns in the parent table and child table match. For example 'Lin:Invoice = Inv:InvoiceID'
These expressions can include local variables bound using the clarion BIND command.

Object based drivers allow a VIEW to be constructed, or altered at run time. The JoinTable method allows another table to be Joined to a VIEW. Either the KEY,FIELD,..,FIELD approach can be used, or a custom join expression can be used.

The pParentFile parameter determines the table, already in the view, to which the new table will be joined.

When declaring a VIEW in code, only two join types are possible; driver:InnerJoin (if the INNER attribute is used) or driver:LeftJoin (if the INNER attribute is not used).  However object based drivers support the additional join types, and these can be set using either the JoinTable command, or by setting prop:inner on an existing Join.

Examples;

JoinTable(InvoiceView,Invoice,Cus:IdKey,driver:leftJoin,Inv:CustomerId)

JoinTable(InvoiceView,LineItems,Product,driver:RightJoin,'Lin:ProductId = Pro:ProductId')

Bind('somevar',somevar)
somevar = 'Sup:Id'
JoinTable(InvoiceView,Product,Supplier,driver:RightJoin, 'Pro:Supplier = somevar')


Bind('SomeExpression',SomeExpression)
SomeExpression = 'Con:Email = Cus:Email'
JoinTable(InvoiceView,Customer,Contacts,driver:LeftJoin,SomeExpression)


See also ProjectField.

prop:AutoProject

When the primary file in the VIEW, or a JOIN in the VIEW, is declared with no PROJECTed fields, then the view engine simply includes all the fields (excluding MEMOs and BLOBS) in the VIEW. Should you need finer control, you can set the prop:AutoProject property as follows;

VIEW{prop:AutoProject,fileindex} = Driver:ProjectFields ! default
VIEW{prop:AutoProject,fileindex} = Driver:ProjectBlobs
VIEW{prop:AutoProject,fileindex} = Driver:ProjectAll
VIEW{prop:AutoProject,fileindex} = Driver:ProjectNone

This allows the developer to control which combination of fields and blobs are automatically included.

The fileindex is the number of the file in the view to set the property on. If that parameter is omitted, or 0, then the setting is applied to all files in the VIEW.

Note that even if prop:AutoProject is set to driver:ProjectNone, primary key fields will still be projected to support paging, POSITION and REGET.

Setting prop:Distinct or prop:NotDistinct will actively suppress the auto-projection of primary key fields, because their inclusion would result in every row being distinct, and no row being a duplicate.

Setting prop:NoKeyProject allows fine control over which key fields are automatically included. Setting prop:AutoProject to Driver:ProjectFields, or Driver:ProjectAll renders the prop:NoKeyProject property moot.

The prop:AutoProject property only applies to cases where no fields for a file, or join, are included in the view. It does NOT apply if any fields (in the primary file or join) are projected. It can be set for any file in the view, however it will only be used if no fields are manually projected.

Prop:Distinct

Before calling the SET(table / key / view) command, the prop:Distinct property can be set. This will add the DISTINCT clause to the SQL SELECT statement. This tells the server to only return unique records; duplicate records are not included in the result set.

customers{prop:distinct} = true
Set(Cus:NameKey)
Loop ...


This property is not reset to false except on an OPEN, CLOSE, CLEAR or CLEARPROPS. If you wish to re-use the VIEW with this property off, then you must set it to false (if you previously set it to true).

For example;

Say you wanted a list of all customers who made a purchase in January. Your VIEW would look something like this;

VIEW(Invoice)
  JOIN(Cus:PrimaryKey,Inv:CustomerId)
    Project(Cus:Name)
  End
End


This would return the customers for all invoices, and the same customer would appear multiple times if they made multiple purchases.

However if View{prop:Distinct} = true is set before the call to SET(View), then only DISTINCT records will be returned by the NEXT loop, and so each customer would only appear once in the list.

There is one important point to note;
In the above example no fields are projected from the INVOICE table into the VIEW. In traditional Clarion this means ALL the fields would be projected (resulting in the DISTINCT clause "failing" since each invoice row is unique.) For this reason, when prop:Distinct is set to true ONLY explicitly projected fields are included in the VIEW.

Equally, because the primary key values cannot be included in the result set (since they would break DISTINCT) paging on this result set is not possible. The entire result set will be fetched from the server.

See also prop:NotDistinct .

Prop:Name

After the View has been OPENed, and before it has been SET, you can use the {prop:name,n} of a field to set the field value to be a SQL function. For example;

InvoiceView   VIEW(Invoices)
                PROJECT(Inv:Number)
                PROJECT(Inv:Total)
              END

InvoiceView{PROP:Name,2} = 'SUM(lin:Quantity * lin:price) AS InvoiceTotal'


This expression is then passed through to the SQL engine, and the result is determined by the engine.

Traditional drivers require the field names to be carefully set to their aliased SQL value.

In object based drivers the expression (the part before the AS separator) is passed through the Expression Translator , so you can use Clarion field names in the expression (as in the example above.)

Common SQL functions that can be used here are SUM, COUNT, AVG, MAX and MIN. However there are a lot of Aggregate and Scalar functions which are supported by the SQL database, and they can all be used here.

An alternative syntax for the prop:Name setting is as follows;

view{'field_label',PROP:Name} = string

This works the same as the {prop:name,n} approach, but is more flexible in that the exact position in the view is not the determining factor.
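Using this syntax, the earlier example could be written against the field's label instead of its position in the view;

InvoiceView{'Inv:Total',PROP:Name} = 'SUM(lin:Quantity * lin:price) AS InvoiceTotal'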

Dummy Table Join

This next part is new to object based drivers.

In the above example the Invoices table contains a field called TOTAL. Hence it can be PROJECTed.
In some cases though there may not be a suitable field in the table to receive the calculated value, and the language does not allow for other variables to be included in the View structure.

In this situation a "Dummy Table" can be used as a placeholder for the local variables. This Dummy table can be declared using the DOS (or DOS2) file driver, and the file itself does not need to be Opened. Consider the case where this DummyTable is declared, and contains a field called Field1.

This Dummy Table can be JOINed in the View. This join will be ignored by the View Driver (and not passed to the SQL engine) because Joins are required to be on a single file driver. However, as a special case, fields that have a name attribute, and include the AS clause, will be Projected, and mapped onto this dummy field. So, for example;

InvoiceView   VIEW(Invoices)
                PROJECT(Inv:Number)
                JOIN(DummyTable,'')
                  PROJECT(Dum:Field1)
                END
                JOIN(LineItems,Lin:InvoiceKey,Inv:InvoiceNumber)
                  PROJECT(Lin:Id)
                END
            END

    InvoiceView{PROP:Name,2} = 'SUM(lin:Quantity * lin:price) AS InvoiceTotal'

Because the DummyTable has a different driver it is "ignored" from the SELECT statement. However Dum:Field1 will be populated by the NEXT or PREVIOUS command with the value returned by the server (i.e. the SUM) as if it belonged to the parent file.

Note that in this example the calculation is not on the view table itself, but on a joined table (LineItems). If you do this you will need to add the child table to the View as well. If you do not want to get a row returned for each line item, then make use of the Group By property to limit the view to the fields in the parent table.

InvoiceView{prop:GroupBy} = 'Inv:Number'

Prop:NotDistinct

The opposite of a distinct result set is one where only records that are duplicated are included.

This property is not reset to false except on an OPEN, CLOSE, CLEAR or CLEARPROPS. If you wish to re-use the VIEW with this property off, then you must set it to false (if you previously set it to true).

For example;

Say you wanted a list of all customers who made more than one purchase in January. Your VIEW would look something like this;

VIEW(Invoice)
  JOIN(Cus:PrimaryKey,Inv:CustomerId)
    Project(Cus:Name)
  End
End


This would return the customers for all invoices.

However if View{prop:NotDistinct} = true is set before the call to SET(View), then only records which would appear multiple times will be returned by the NEXT loop. Each customer will only be returned once, but this loop is telling you all the customers who have multiple invoices in the period.

There is one important point to note;
In the above example no fields are projected from the INVOICE table into the VIEW. In traditional Clarion this means ALL the fields would be projected (resulting in the NOTDISTINCT clause "failing" since each invoice row is unique.) For this reason, when prop:NotDistinct is set to true ONLY explicitly projected fields are included in the VIEW.

Equally, because the primary key values cannot be included in the result set (since they would break NOTDISTINCT), paging on this result set is not possible. The entire result set will be fetched from the server.

See also prop:Distinct.

Prop:Inner

In the View declaration you can use the attribute ,INNER to signify that the Join should be an INNER join. Traditional drivers also allow the property to be set in code, using

View{prop:inner,n} = 1

Object based drivers allow you to set the Join type in code using the same syntax, however more join types are supported;

driver:LeftJoin        Equate(0) ! this is the current clarion default when not ,INNER
driver:InnerJoin       Equate(1) ! this is the Clarion ,INNER property
driver:Rightjoin       Equate(2)
driver:FullJoin        Equate(3)
driver:CrossJoin       Equate(4)
driver:SelfJoin        Equate(5)


So, to set the join type to be a Right Outer Join you can set

View{prop:inner,n} = driver:RightJoin

Prop:GroupBy and Prop:Having

In SQL it's possible to group the rows of a result set together, typically doing some calculation on them to extract some interesting information. The view engine allows you to set these clauses in the view's SELECT statement.

In traditional drivers these have to use the aliased form of the field name.

In object based drivers these are passed through the Expression Translator , so you can use Clarion field names or function names in the properties. This makes writing them in code a lot easier as the Clarion label for the fields can be used.

While an in-depth discussion of GROUP BY and HAVING is beyond the scope of this guide, it is well worth the effort to get familiar with them, especially if you are generating reports. They are especially effective at consolidating data in preparation for reporting.
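For example, to group invoices by customer and only return customers whose invoices total more than 1000 (a sketch - InvoiceView, Inv:CustomerId and Inv:Total are hypothetical names);

InvoiceView{prop:GroupBy} = 'Inv:CustomerId'
InvoiceView{prop:Having}  = 'SUM(Inv:Total) > 1000'
Set(InvoiceView)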

prop:NoKeyProject

VIEW{prop:NoKeyProject} = Driver:ProjectAllKeys       ! 0 ! default
VIEW{prop:NoKeyProject} = Driver:ProjectNoKeys        ! 1
VIEW{prop:NoKeyProject} = Driver:ProjectPrimaryKeys   ! 2 ! recommended

When a traditional driver generates the SQL SELECT statement for a VIEW, it automatically includes all key fields, from all keys, across all files in the VIEW.
This is equivalent to prop:NoKeyProject=0, or prop:NoKeyProject=driver:ProjectAllKeys.

Since this can be somewhat wasteful if all those values are not necessary, the driver has the ability to suppress this by setting prop:NoKeyProject=1. If this property is set like this then only fields expressly added into the VIEW using PROJECT will be included in the output.

Setting this property to 1 has knock-on effects. Without primary key values being projected (assuming they're not manually projected) it's not possible to PUT or DELETE VIEW records. It also breaks the SET(key) / NEXT(View) pattern. And it breaks the use of POSITION and hence REGET to load the FILE records. This makes it useful for report-type procedures that are read-only, but not useful for browses and some processes, which can turn into writes (unless very specific care is taken when constructing the VIEW.)

This all or nothing approach is somewhat restrictive, and a middle ground is likely more useful. In most cases including all the key fields (for all keys) is unnecessary[1], but Primary Key fields are necessary. Including just these fields allows the VIEW to behave normally in all respects, and also allows for paging, updates, POSITION, REGET and so on.

Object based drivers add this interim option, set with prop:NoKeyProject=driver:ProjectPrimaryKeys. When set in this way primary key fields are automatically projected, but fields from the other keys are not included automatically. (They can of course still be included in the view as an explicit PROJECT.)

Choosing a default approach for the object based drivers here is difficult. The most compatible default is to do the same as the traditional drivers, but that's also the least performant. Setting it either way is a trivial driver setting, but that still leaves the question of which to use if the driver setting is not set. On balance, compatibility wins over performance, so it defaults to the same behavior as the traditional drivers. However, changing the setting to /NoKeyProject=2 is strongly recommended, especially with tables that have large numbers of key fields.

Note 1: Projecting all the fields from all the keys prevents the server from making use of Covering Indexes, so in some cases this performance hit can be large.

prop:Order

A VIEW declaration can contain an ORDER attribute. For example;

CustomerView      VIEW(Customer) ,ORDER('CUS:ID')
                       PROJECT(CUS:ID)
                       PROJECT(CUS:FirstName)
                       PROJECT(CUS:LastName)
                     END


The order can also be set at run time. For example;

Open(CustomerView)
CustomerView{prop:Order} = 'Cus:Lastname, Cus:Firstname'


However if there is no ORDER attribute, and no run-time prop:Order is set, then the VIEW makes use of the default order, as determined when the VIEW is opened. That default order is inherited from the last SET command issued on the primary file in the VIEW. For example;

Open(Customer)
Set(Cus:NameKey)
Open(CustomerView)


The Clarion template chain (known as the Legacy templates) follows this pattern. The ABC templates make use of prop:Order to set the order as needed.

Note 1: This pattern does not extend to prop:Filter . Using SET(key,key) before the VIEW is opened does not set the prop:Filter for the  VIEW.

prop:OrderAllTables

Traditional drivers only generate the primary file's fields into the ORDER BY clause unless prop:OrderAllTables has been set to true. Object based drivers ignore this property (by treating it as permanently on) - the generated ORDER BY clause automatically includes fields from all the tables in the view, to avoid inconsistent output.

PROJECTFIELD

ProjectField(VIEW pView, String pExpression, String pAs)
ProjectField(VIEW pView, File pFile,Long pIndex)
ProjectField(VIEW pView, *? pField)


Parameter Type Description
pView View The View label of the View to attach the field to.
pExpression String A SQL Expression. This could be just a string containing a field label like 'Lin:CustomerID'
pAs String A name to use for the result column. If blank, and the expression is just a label, then the SQL column name will be used for the result column name.
pFile File The File in the View to project the field from
pIndex Long The field number of the field in the file
pField Any A field in any table in the view. This is not a string containing the field name, but the field itself.

The Clarion VIEW structure consists of the primary table, plus a collection of JOIN and PROJECT statements. With traditional drivers the fields projected in the VIEW are set at compile time. The object based drivers extend this to allow the fields to be set at run time.

With traditional drivers only fields listed in the FILE structure can be included in the VIEW . That can be limiting because SQL allows calculated fields in the SELECT statement. Object based drivers allow an expression AS name to be added into the VIEW .

A new command called PROJECTFIELD  (Pronounced Pro Ject Field) has been added to allow the developer more flexibility in adding fields to the VIEW, and in controlling the contents of these fields.

The field to project can be set in one of three ways; as a field, as a field number for a specific file, or as a SQL expression and AS name.

For Example;

InvoiceView   VIEW(Invoice)
                 PROJECT(Inv:Terms)
                 JOIN(LIN:InvKey,SQ2Inv:Guid)
                 END
               END

Open(InvoiceView)
BindField(InvoiceView,'Total',total)
ProjectField(InvoiceView,'Lin:Quantity * Lin:Price','Total')       ![1]
ProjectField(InvoiceView,'Inv:Date','')                            ![2]
ProjectField(InvoiceView,Invoice,3) ! 3rd field in Invoice table   ![3]
ProjectField(InvoiceView,Lin:ProductID)                            ![4]
ProjectField(InvoiceView,'AGE(Inv:Date)','Inv:Terms')              ![5]


In the above example;
[1] A calculation is done on each line item, and the result put into Total (note the necessary BINDFIELD).
[2] The Inv:Date field is simply added to the view (using the name in a string).
[3] The 3rd field in the Invoice table is added to the view.
[4] The Lin:ProductID field is added to the view (using the field directly).
[5] The Inv:Terms field is already in the view - the expression for that field is being set.

Calls to PROJECTFIELD must be done after the call to OPEN and before the view is SET. Fields added via PROJECTFIELD are cleared on CLOSE or CLEARPROPS. Once added, there is no command to remove a projected field, other than closing the view and opening it again.

PUT(View)

The VIEW structure supports a PUT command. It is not often used in code; however, it is a powerful feature, so it's worth understanding exactly what it does.

For both traditional and object based drivers, when you do a PUT on a VIEW then only the changed fields are written to the database. In other words, the driver optimizes the SQL UPDATE statement, limiting the fields being updated to the ones that were changed. This makes for a very efficient way of writing changes to multiple records, say in a Process Template procedure.

For fields which are projected (using PROJECT) in the VIEW this makes sense. A record is read, the program inspects the value, and then, if necessary, changes the value and writes it back. There is a catch though. For traditional drivers, if you set a field that's not in the VIEW structure then it might, or might not, be written to the database.

For example;

InvoicesView  View(Invoices)
                Project(Inv:Date)
              End

Loop
  Next(InvoicesView)
  If Errorcode() then break.
  If Today() - Inv:Date > 30
    Inv:Overdue = true
    Put(InvoicesView)
  End
End


At first glance this code is simple - check the age of each invoice, and set it to overdue if it's older than 30 days. However this code will fail if 2 invoices in a row are overdue.

Since Inv:Overdue is not projected in the VIEW, the current value of this field is not refreshed as it loops through the VIEW. Thus for the second invoice (which is overdue) the record is NOT written, because the value of Inv:Overdue did not change.

Because the write may, or may not, happen, depending on some earlier value, this code is problematic. It appears to work, and will likely work during development, but will fail (some of the time) in production. This is not ideal.

For this reason object based drivers ONLY write fields which are specifically projected in the VIEW. Using this approach the code will either always work, or, if the field is not in the VIEW, always fail.
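For example, the overdue-invoices loop shown earlier becomes reliable under the object based drivers once the field being written is explicitly projected (a sketch, using the same table and fields as above);

InvoicesView  View(Invoices)
                Project(Inv:Date)
                Project(Inv:Overdue)    ! projected, so it is refreshed on NEXT, and written on PUT
              End

Because Inv:Overdue is now projected, its value is refreshed on every NEXT, and it is written on every PUT where the value has changed.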

This means that programs with this sort of code, which could potentially sometimes fail, will now always fail. Places where a PUT on a VIEW is done should be inspected, and/or tested for this change. While it is uncommon to find a PUT on a VIEW in embed code (uncommon, but possible) the Process Template generates this code if the Action For Process is set to PUT record. A simple utility template is provided to assist in locating any procedures of this type in the app.

For traditional drivers the Clarion help says;

PUT only writes to the primary file in the VIEW because the VIEW structure performs both relational Project and Join operations at the same time. Therefore, it is possible to create a VIEW structure that, if all its component files were updated, would violate the Referential Integrity rules set for the database. The common solution to this problem in SQL-based database products is to write only to the Primary file. Therefore, Clarion has adopted this same industry standard solution.[1]

Object based drivers remove this limitation. If you do a PUT on a VIEW, then every table in the view is updated where any of the fields for that record have been changed. For example, say we had a VIEW on Invoices and LineItems; then

Lin:Quantity = 0
Put(InvoiceView)


would only update the LineItems record, not the Invoice record. Only the Quantity field in the LineItems table would be updated, all the other fields will be left as-is.

If fields in multiple tables are updated, then the write is done inside a transaction. In this way either all the tables will be updated, or none will be updated. A write could fail because of a validity check failure.
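As a sketch of such a multi-table write (Inv:Status is an illustrative field name, not taken from the examples above);

Next(InvoiceView)
Inv:Status = 'Amended'   ! a field in the Invoices table (illustrative name)
Lin:Quantity = 0         ! a field in the LineItems table
Put(InvoiceView)         ! both tables are updated, inside a single transaction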

A point worth highlighting is that writing a new value will not change existing values in the received result set. In other words, if looping through a view, and say a DATE field is updated, this update will not be reflected while inside this (page of) the result set. Put another way; the write to the database will not affect the results of the previous SELECT which is currently being processed by the NEXT loop.

Finally, it is also worth noting that changing Primary Key values of a row using a VIEW PUT is not recommended. Updating primary key values while looping through a result set will likely lead to unexpected behaviors.

Note 1: It is worth noting that traditional SQL drivers do partial updates. Only the changed fields are written on a PUT. So if you want to do child-file updates with traditional SQL drivers then you can simply do a PUT(File) instead of a PUT(View). In other words;

Inv:Date = Today()
Lin:QuantityDelivered = 5
Put(Invoices)  ! Writes the Invoices record
Put(LineItems) ! Writes the LineItems record.


RECORDS

Records(VIEW pView),LONG
Records64(VIEW pView),BIGINT
Records(VIEW pView, Long pNumber),BIGINT


Like the File RECORDS command, the View RECORDS command has been extended somewhat [1].

All the properties supported by the SET command (including Distinct, NotDistinct, Limit, Offset, Having, GroupBy, Filter and Order) are applied when a call to RECORDS is made. Thus the RECORDS command should return the number of rows that a following SET NEXT loop will return.

RECORDS can be called before the SET command; however, any parameters subsequently passed to SET are then not taken into account. So, for maximum reliability, it is recommended that RECORDS be called after SET.

It's worth noting that this call to the server is relatively expensive, as the server has to calculate this value - it isn't stored - so it should be called only when necessary.
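For example, the recommended call order is SET first, then RECORDS (the view and variable names are illustrative);

Open(InvoiceView)
Set(InvoiceView)
count = Records(InvoiceView)   ! the number of rows the NEXT loop below will return
Loop
  Next(InvoiceView)
  If Errorcode() then break.
  ! process the record
End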

The RECORDS function in Clarion is declared as returning a LONG. Since databases can contain more than 2 billion rows, this may not be sufficient space. RECORDS64 returns a BIGINT (which is a REAL at the time of writing this). The new variant of RECORDS (where a number is passed) also returns a BIGINT.

If the Number parameter is passed, then the call behaves the same as calling SET(View,number). The number specifies the number of fields in the ORDER which will act as a starting position (or fixed position) for the result set. See the SET command for more details about this number.

The VIEW must be open for the command to work. If the VIEW is closed an error NotOpenErr (37) is set.

Note 1: In truth the RECORDS(View) command is mostly not supported in the traditional drivers. It is supported in the object based drivers.

REGET

REGET(VIEW View, String Position)
REGET(VIEW View)

The REGET command, when called on a VIEW, does a GET on all the tables in the VIEW, loading all the fields of the FILE record for each table associated with the VIEW.

The Position parameter is a string which was retrieved when the view record was loaded. In a browse this position is stored in the queue, and this stored position is used with the REGET before, say, going to a form.

The object based drivers extend the function call slightly, in that the Position parameter may be omitted. If omitted, the current primary key values in the FILE record are used.
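For example (the queue field and key names are illustrative);

! traditional form - position saved in the browse queue when the row was loaded
Reget(CustomerView,Que:Position)

! object based drivers - the Position parameter can be omitted; the primary
! key values currently in the FILE record buffer are used instead
Cus:Id = SelectedId
Reget(CustomerView)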

SEARCH

SEARCH(VIEW View, String pSearchTerm)

The SEARCH command supports the VIEW structure as well as the FILE structure. For more general information on using it see SEARCH.
Currently the search only applies to the Primary File of the VIEW - ie the first file.

SET

Set(VIEW View, <Long Number>)

The Clarion documentation for the SET(View) command, when a number parameter is passed, is ambiguous. First it says;

Number: An integer constant, variable or expression that specifies the start position based on the first number of components of the ORDER attribute. 

Later it says;

The optional number parameter limits the SET to assume that the values in the first specified number of expressions in the ORDER attribute are fixed.

Testing shows that the first one is the one that is implemented[1]; the number indeed acts as a starting position not a fixed value.

In the first statement though it goes on to say;

If omitted, all ORDER attribute components are used.

This is inaccurate. If the number is omitted then the order is simply the order, and none of the fields act as a starting position.

With traditional drivers the time when the value is used is also inconsistent. For ISAM files (Topspeed et al) the value at the time of the SET command is used as the starting position. For SQL files the value at the time of the first NEXT or PREVIOUS command is used. Since in regular code this value is unlikely to change between the SET and NEXT this will not usually cause any difference in code outcome.

With all that uncertainty in mind, the following describes how the object based drivers implement the SET command.
  1. If the  Number parameter is used and the value is greater than zero, then the Number acts as a starting position for the view. (ie This number of fields in the prop:Order act as a starting position.)
    In other words setting Number to 2, means the first two components of the prop:Order property contain starting-position values.
    This is consistent with how the traditional drivers actually behave.
  2. If the  Number parameter is used and the value is less than zero,  then the number acts as a fixed position for the view. (ie The ABS(Number) of fields in the prop:Order act as a fixed position.)
    In other words setting the Number to -2, means the first two components of the prop:Order property have fixed values.
    This option is new - the traditional drivers don't support negative numbers here.
  3. If the Number parameter is used, then the values of those specified prop:Order fields at the time of the NEXT are stored, and used when limiting the result set.
    This matches traditional SQL drivers, but not traditional ISAM drivers.
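For example, given an order of customer then date (the names are illustrative);

InvoiceView{prop:Order} = 'Inv:CustomerID,Inv:Date'
Inv:CustomerID = 7
Inv:Date = Date(1,1,2024)
Set(InvoiceView,2)    ! starting position: begins at customer 7, 1 Jan 2024, then continues onward
Set(InvoiceView,-2)   ! fixed position: only rows for customer 7 dated 1 Jan 2024

As per point 3 above, the values of Inv:CustomerID and Inv:Date at the time of the first NEXT are the ones used.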

FILE Set

Hidden in the Clarion documentation is this small nugget;

Either a SET statement on the VIEW's primary file before the OPEN(view), or a SET(view) statement after the OPEN(view), must be issued to set the VIEW's processing order and starting point, then NEXT(view) or PREVIOUS(view) allow sequential access to the VIEW.

This means that instead of using the pattern;

Open(View)
Set(View)
Next(View)


you can use the pattern

Set(Key)                     ! Or Set(key,key) etc. Key is in View's Primary File
Open(View)
Next(View)


This pattern is only valid if there is no SET(View) command.

Set(Key)                     ! this is completely ignored
Open(View)
Set(View)                    ! ...because this SET exists
Next(View)


In this pattern the setting of prop:Filter and prop:Order is ignored.

Set(Key)                     ! Or Set(key,key) etc. Key is in View's Primary File
Open(View)
View{prop:order} = 'whatever'     ! Ignored because there's no SET(view) command
View{prop:filter} = 'whatever'  ! Ignored because there's no SET(view) command
Next(View)


However if the VIEW has a declared ORDER and/or FILTER then this overrides the corresponding part of the SET(key) or SET(key,key).

ProductView   View(Product),FILTER('SQPro:Id > 40')
              End      


  Set(Key,Key)                 ! The sequence part (the second key) is ignored because the View Declaration has a FILTER
  Open(ProductView)
  Next(ProductView)

ProductView   View(Product),ORDER('Pro:Id')
              End      


  Set(Pro:NameKey)             ! This is ignored because the View Declaration has a ORDER set
  Open(ProductView)
  Next(ProductView)


Note 1: Not mentioned in the documentation is that, for traditional drivers, BIND has to be called on the field, or record, in order for this feature to work. By contrast the object based drivers do not require the field or record to be bound. (It doesn't matter if they are bound, it's just not required.)

Subsets

Views support subsets, using the same commands and techniques as for FILE subsets.

The subset is assumed to be working on the primary file for the VIEW.

prop:Sql

As mentioned earlier, prop:SQL can be used to return the last SQL statement executed. This can be useful during debugging and performance testing.

In traditional drivers using viewname{prop:Sql} returns nothing. In order to see the most recent SQL for a VIEW, you must use prop:Sql on the primary FILE instead. So instead of CustomerView{prop:Sql} you would use Customer{prop:Sql}.

With object based drivers this is not necessary. Use viewname{prop:Sql} to return the SQL for the VIEW. The prop:Sql for the primary FILE is not changed by the VIEW.
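For example, a sketch of inspecting the SELECT generated by a view;

Set(CustomerView)
Next(CustomerView)
Message(CustomerView{prop:Sql})   ! the most recent SQL statement for the VIEW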

View Performance

Views are used in different ways in the program. These differences can radically change how the view engine processes the data. Traditional drivers do not "know" how the view will be used, so they have to cater for all eventualities. By contrast the object based drivers introduce properties which tell the view engine more about how the data will be used. This allows it to optimize behavior internally to suit the use case.

Different use cases impact the resources needed by the server and the client. This in turn affects the performance of the system, both of the program itself, and of the database server (which may be serving other people at the same time.)

Naturally the view engine defaults to the same approach that the traditional drivers use - that of keeping all options open so that it works regardless of the use case. That said, it's possible to send some properties to the view to give it an indication of how it will be used, and hence allow it to optimize internally for that behavior. This can result in faster performance, and reduce resource load on the server.

The properties, and their effects are listed below;

prop:Paging (default: true)
  Behavior: If true, data is only collected from the view a page at a time. If false, the view will access the whole result set (just once). Only set it to false where the size of the result set is "small".
  Use case: Browses
  Effect: When true, this speeds up opening a browse, as only a few records are needed to fill the browse page.

prop:FetchSize, aka prop:PageSize (default: 30)
  Behavior: The size of the page fetched at a time. 30 is a good number for browses (the number of lines on the initial opening page). For reports and processes this number can be higher; 300 or so is a recommended value.
  Use case: Reports, Processes
  Effect: There is some overhead in fetching a page from the server. Fewer pages mean less overhead, but bigger pages mean more time transferring data that might not be used.

prop:NoPrevious (default: false)
  Behavior: When true, the view will not use the PREVIOUS command.
  Use case: Reports, Processes

prop:ReadOnly (default: false)
  Behavior: When true, the view will not use the PUT command, nor will any file in the view use the PUT or DELETE command.
  Use case: Reports, some Processes
  Effect: When true, the engine does not set the internal position for each file.
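For example, a report or process view might be primed like this before processing starts (the view name is illustrative);

ReportView{prop:Paging} = true       ! the default; fetch a page at a time
ReportView{prop:FetchSize} = 300     ! bigger pages suit reports and processes
ReportView{prop:NoPrevious} = true   ! forward-only; PREVIOUS will not be used
ReportView{prop:ReadOnly} = true     ! no PUT or DELETE will be done
Set(ReportView)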

Filters


With traditional drivers, filters can only be applied to VIEW structures. Object based drivers extend this support to FILE structure SET NEXT loops. The information in this section applies to filters for either FILE or VIEW structures.

Client-Side versus Server-Side Filters

Traditional Clarion drivers have the ability to pass the filter to the server for processing. This is good; you want as much filtering as possible to be done for you by the server. With traditional drivers, if the filter is not server-compatible then the whole filter is ignored, and the whole result set is passed back to the program for client-side filtering, and this includes the Range.

Aside: There is only one filter sent to the server. There is no actual Range (as suggested by the templates) in the VIEW syntax. The template Range is a simple way for the user to build up a filter, but that's all it is, a filter construction mechanism.

If any part of the filter, including the range, is not compatible with the server then the whole filter (including the range) is passed back to the client for processing. This happens silently, and without warning. This can have substantial performance implications and often leads to the common myth that views are slower than files at retrieving data.

The correct way to deal with this situation (regardless of the driver) is to separate out the incompatible parts of the filter, and move those into client-side code (ie an embed point). Then the compatible parts of the filter (and range) can be used by the server and a smaller result set constructed. However in order to do this developers need to be made aware of where it is happening.

The above applies to traditional drivers. Object based drivers behave differently.

With object based drivers if the filter is not compatible with the server then an error is generated (and optionally displayed.) It will not drop back to client-side processing, and hence the result set will be empty. This will allow the programmer to quickly spot the error and fix it, rather than have the program "just run slow" consuming unnecessary client-side AND server-side resources.

Important:  This is a significant change when it comes to VIEWs. In traditional drivers if the filter is invalid (ie it cannot be processed) then ALL the records in the View are returned in the result set. In object based drivers this is reversed; if the filter cannot be processed then NO records are returned. As programs move to wider usage, especially on the web (AnyScreen et al), this is an important security feature.

If there is one major lesson here, it's that Server Side filters and Client side filters must not be in the same place. Server side filters belong in the template FILTER option, or in code, in the prop:Filter (or prop:SqlFilter). Client side filters should be coded by hand in the appropriate embed point inside the loop.
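For example (MyComplexCheck stands in for any hand-coded expression the server cannot process, and minDate for a bound local variable);

! server-side part - in the template FILTER option, or in prop:Filter
InvoiceView{prop:Filter} = 'Inv:Date >= minDate'
Set(InvoiceView)
Loop
  Next(InvoiceView)
  If Errorcode() then break.
  ! client-side part - hand-coded in an embed point inside the loop
  If Not MyComplexCheck(Inv:Notes) then cycle.
  ! process the record
End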

Filter Functions

When writing filters in Clarion it is common to use Clarion built-in functions in the filter. The object based drivers support this using the Expression Translator, and include support for some additional built-in functions. Additionally SQL users can make use of any Server Side functions supported by their specific database engine. A list of supported Clarion functions appears in the section Expression Translator below.

Server-Side functions

In addition to the built-in Clarion commands (which are translated for you), in object based drivers it is possible to use other SQL functions directly when appropriate. For example the SQL function REVERSE exists, and has no direct Clarion equivalent. Nevertheless if you use REVERSE in a filter then it will be included in the filter passed to the server.

If you use a SQL specific function in the filter, and some other part of the filter is not server-compatible (with your database engine), then the result set returned will be empty.

Make sure the filter is completely SQL compatible. Move client-side filtering code into an embed point.

Additionally, the use of specific SQL Server functions in the filter may be specific to that SQL database. If you port your application from one database to another then the new database may, or may not, support this function. So when using server-side functions be aware that some potential future portability is lost.

Expression Translator

A number of commands (DELETE, UPDATE etc) take a Filter parameter, or other parameters that support Expressions. Some properties (prop:Project, prop:Filter etc) can contain a mix of Clarion code and SQL code. As noted above these expressions are passed through the Expression Translator before being included in the SQL statement. This section of the documentation explores these expressions in more detail and documents the Clarion functions which are available to be used.

Functions

The following are the default support and translations for the drivers. The specific syntax can differ between databases, and can be overridden at the driver level, so consult the driver docs for any deviations from this.

Function                                 Support  SQL
Abs(expression)                          OBD      ABS(expression)
BeginsWith(substring,string)             OBD      string LIKE(substring%)
  ISAM drivers will need to Add and BIND a function called BeginsWith(substring,field) for this to work client-side.
Chr(expression)                          OBD      varies
  Common translations: CHR(expression), CHAR(expression), ASCII_CHAR(expression).
Clip(expression)                         All      RTRIM(expression)
expression & expression                  OBD      CONCAT(expression,expression) or expression || expression
  MySQL, MSSQL and MariaDB use CONCAT. Oracle and SQLite use ||. PostgreSQL and DB2 support both.
Date(month,day,year)                     OBD      varies
Day(date field)                          OBD      varies
  From a field in the database extract the Day-of-the-Month number. [1]
EndsWith(substring,string)               OBD      string LIKE(%substring)
  ISAM drivers will need to Add and BIND a function called EndsWith(substring,field) for this to work client-side.
Inlist(expression,string1,string2,...)   OBD      expression IN (string1,string2,...)
Inrange(expression,low,high)             OBD      expression BETWEEN low AND high
  SQL BETWEEN is inclusive of the low and high values.
Int(expression)                          OBD      FLOOR(expression)
Instring(substring,string,step,start)    All      LIKE(%x%)
  The step and start parameters to INSTRING are ignored, and treated as 1,1.
Left(expression)                         OBD      LTRIM(expression)
Lower(expression)                        All      LOWER(expression)
Match(first,second,mode)                 All      varies, depending on Match Mode and database engine
  Regular Expression matches are driver dependent.
Month(date field)                        OBD      varies
  From a field in the database extract the Month number. [1]
Null(field)                              OBD      field IS NULL
expression ^ expression                  OBD      POWER(expression,expression)
Round(expression,order)                  OBD      ROUND(expression,order)
Sqrt(expression)                         OBD      SQRT(expression)
Sub(expression,start,length)             All      SUBSTRING(expression,start,length)
Today()                                  OBD
Trim(expression)                         OBD      TRIM(expression)
Upper(expression)                        All      UPPER(expression)
Year(date field)                         OBD      varies
  From a field in the database extract the Year number. [1]

Note [1]: These functions can operate on SQL DATE fields, but also on LONG fields which contain a Clarion Standard Date. In addition they also work on TimeStamp fields and TimeStamp_Date fields.

Fields

Fields are used when constructing filters or other properties.

In traditional drivers these fields have to be specified using the Clarion field name in the Clarion part of the property, and the field Alias name in the SQL part of the property (for example {prop:SqlFilter}). Extending this behavior, object based drivers allow you to use the Clarion field name, the SQL field name (table.column), or the View Alias name (a.field) in any part of the property. You can also use a plain name without any prefix; the engine will match that first against column names, and failing that, against field labels.

Using the Clarion name is the most portable since then the translation to the alias name is done for you by the Expression Translator.

Using the Alias name is the least robust because it is susceptible to changes in the VIEW declaration. An extra JOIN added (or removed) to the view can break the alias. Thus using the alias name is not recommended, however it is preserved for backwards compatibility.

These names can be used regardless of the position in the property - they can be used as parameters to Clarion or SQL functions, used in expressions, and you can use both forms of the name in different parts of the property if desired.

Note that GROUP fields do not exist in the database, their "grouping together" is purely a Clarion construction. Therefore you cannot use GROUP fields in a filter expression. The fields in a group exist in the database, and can be used in filters.

If you include a group in a prop:Project then all the fields in the group are projected.

You can also use local variables in a filter.  These will be evaluated and the current value sent to the server when the SET function is called. These variables MUST be bound using BIND in order for them to be available. If they are not bound, or the EVALUATE triggers an error, then the property will be discarded and an error generated.
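For example (the names are illustrative);

minDate     Long
  Code
  Bind('minDate',minDate)
  minDate = Date(1,1,2024)
  InvoiceView{prop:Filter} = 'Inv:Date >= minDate'
  Set(InvoiceView)    ! minDate is evaluated here, and its current value sent to the server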

Dates

Clarion stores Dates in three different ways: using the DATE datatype, using a Clarion Standard Date value in a LONG datatype, and using a STRING(8) timestamp with a GROUP declared OVER it;

TimeStamp          String(8)
TimeStamp_Group    Group,Over(TimeStamp)
TimeStamp_Date       Date
TimeStamp_Time       Time
                   End


The object based drivers do a lot of work under the hood to ensure that you can use any date column in the database (regardless of whether it is a DATE, LONG or over a Timestamp) in a filter. Equally it allows the DATE, DAY, MONTH and YEAR functions to be used on DATE, LONG and TimeStamp-String columns. The functions will also work on the TimeStamp_Date column.
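For example, this filter works whether Inv:Date is declared as a DATE, as a LONG holding a Clarion Standard Date, or as part of a timestamp (a sketch; the names are illustrative);

InvoiceView{prop:Filter} = 'Month(Inv:Date) = 1 And Year(Inv:Date) = 2024'
Set(InvoiceView)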

When comparing the TimeStamp_Date field to something, it is necessary to use the TimeStamp_Date name, not just TimeStamp. So

Timestamp_Date = 'yyyy-mm-dd'

is preferable to

Timestamp = 'yyyy-mm-dd'

Performance Tips


The underlying driver performance is part of the equation of your program performing well. However another layer of performance exists - perhaps giving the driver more context, or making use of features and behaviors available in object based drivers that are not in traditional drivers. The goal of this section is to highlight changes which, if implemented in your program, could improve performance.


Beta Testing

Comparing performance between two drivers is hard. Building benchmarks to measure performance is harder.

For example when converting an application from the TOPSPEED driver to a SQL driver it's often just assumed that the application will go faster. In some places it will, but not in all places. In some places code may need to be changed to allow the SQL driver to be performant. Certainly SQL can go faster than Topspeed, and out of the box it does so in many situations. But equally, to maximize the performance gains from the SQL driver, some changes to the code may be desirable.

Equally when comparing a traditional SQL driver with an object based SQL driver it's hard to talk about performance as a single entity. There are things the traditional driver will do faster, but the object based driver may do more checking, be more reliable, or allow for more variety in functionality. And there are some tasks the object based driver will do faster.

Performance also involves the template set, the class code, hand-code and the driver itself. Over the next period of time, changes will be made to the template layer, and the class layer, so that generated applications can best make use of the functionality, and abilities of the new drivers.

Server performance is also a consideration. Requests made to a server consume server resources, and that resource usage is seldom factored in when building benchmark tests. The object based drivers have explicitly set out to limit server-side resource usage, which in the real world allows servers to perform better for all users, even if the speed for lab-based, client-side, benchmark tests may be slower.

Benchmarks should take into account the size of the database. Generally speaking things slow down as the database gets larger. So benchmarks on small data sets can deliver different results to the same benchmarks run against large data sets.

Network performance also plays a huge role. Network performance is made up of two factors; the size of the connection (aka the bandwidth, which determines the raw speed of moving the data) and the distance between the client and the server (aka the latency, which determines the overhead time of each request.) Networks with high latency benefit from fewer, larger, requests. Networks with low bandwidth benefit from more frequent, smaller, requests.

Lastly it should be pointed out that performance is a process, not an event. First it must be made to work right. Once that phase has been achieved then it must be made to work fast. Since these drivers are very new, they have not yet been optimized for performance. In ad-hoc testing here they are faster in some tasks, slower in others.

Currently, the drivers therefore make no claims as to overall performance. Naturally they should be "fast enough" for normal usage, and users should not notice a dramatic speed gain, or loss. In the longer run the goal will be to ensure that they perform as quickly as traditional drivers for most tasks, and faster for some tasks.

Performance Levels

From a File Driver perspective, there are fundamentally three ways to improve performance in your application.
  1. L1: Using the correct File Driver, Driver Settings, Database Configuration, templates and classes to apply globally to the application.
  2. L2: The addition of code to the existing code, which gives the driver more information about the task and context at hand. For example by setting properties, or using commands like BUFFER, STREAM, or LOGOUT.
  3. L3: The refactoring of existing code to make better use of newer commands, or new features, to accomplish the root task.
In the documentation below suggestions will be marked with L1, L2 or L3 to indicate the nature of the change.

Level 1 changes are simple to apply, and can make application-wide performance improvements.

Level 2 changes require some code review, but slow parts of the program can be easy to identify, and with small changes can be improved.

Level 3 changes require changing the existing pattern of code, and identifying new patterns that are more suited to the database (this is especially true for programs and programmers raised on ISAM but newer converts to SQL.)

DELETE

The traditional way to delete a record in Clarion is to load it first into the record buffer and then call the DELETE command. This obviously still works.

L2 However object based drivers allow you to skip the READ and simply delete the record with an appropriate filter. For example, if the primary key field is id, and the record id to delete is 10, then instead of

cus:id = 10
Get(Customer,Cus:IdKey)
If errorcode() = NoError
  Delete(Customer)
End


you can just write

Delete(Customer,'cus:id = 10')

If you wish to know the number of records deleted (in this case 1) you can test prop:affected

Delete(Customer,'cus:id = 10')
affected = Customer{prop:affected}


If no records were affected then the Record Not Found (35) error code is set. So this code would also work;

Delete(Customer,'cus:id = 10')
If Errorcode() = NoRecErr
  ! record not found
End


This (slightly) speeds up the delete of individual records.

L3  However if you are deleting records in a loop then a more dramatic performance improvement is possible. For example, say you're clearing out the logs, and so deleting all records from January. Your code might look like this;

Log:Date = Date(1,1,2024)
Set(Log:DateKey,Log:DateKey)
Loop
  Next(Log)
  If Errorcode() then break.
  If Log:Date > Date(1,31,2024) then break.
  Delete(Log)
End


this can be replaced with

Delete(Log,'Date >= Date(1,1,2024) and Date <= Date(1,31,2024)')

Assuming there are a lot of log records, this speed improvement could be very substantial.

Of course getting the filter wrong could be pretty bad, so you might want to check, and test, this sort of statement well.

Explain

In Clarion, a SET / NEXT loop works on a Result Set. This result set is constructed by the database engine when the first NEXT (or PREVIOUS) after a SET command is encountered. This translates into a SELECT statement sent to the database engine.

When a SELECT request is sent to the server, it is processed by the server, which then decides on the best way to build the result set for the request. Understanding what it is doing, and when it is doing badly, can be critical to getting the best performance from a request.

Fortunately the database is willing to show the plan it decides is optimal. If the request is slow, then being able to see the plan gives the developer vital information as to how to improve the plan, and in doing so, improve the performance of the request.

There are three different ways to trigger this explanation;

The result of the explain (the explanation) is sent to DebugView++. The format will vary from one database engine to the next. The magic words to look out for are Table Scan or just Scan. Table Scans, especially on larger tables, are slow.

For example, here is the output from SQLite, for a SELECT where a Key is available;

-- EXPLAINING: ----------------------------------------------
 SELECT A.TITLE,A.YEAR,A.EXTRACT,A.GUID FROM Movies A WHERE (A.TITLE > ?1 COLLATE NOCASE) OR (A.TITLE = ?1 COLLATE NOCASE AND A.YEAR >= ?2) ORDER BY A.TITLE COLLATE NOCASE ASC,A.YEAR ASC LIMIT 30 OFFSET ?3 ;
 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-- ID | Parent | Description
-- 08 | 0      | SEARCH A USING INDEX Movies_TITLEKEY (TITLE>?)
-------------------------------------------------------------


The explanation includes the SELECT statement and then the plan for executing it. In this case a suitable key (Movies_TITLEKEY) is readily available.

Here's the same browse though, sorted by a field not in a key;

-- EXPLAINING: ----------------------------------------------
 SELECT A.TITLE,A.YEAR,A.EXTRACT,A.GUID FROM Movies A WHERE (A.EXTRACT > ?1 COLLATE NOCASE) OR (A.EXTRACT = ?1 COLLATE NOCASE AND A.GUID >= ?2) ORDER BY A.EXTRACT COLLATE NOCASE ASC,A.GUID ASC LIMIT 30 OFFSET ?3 ;
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-- ID | Parent | Description
-- 07 | 0      | SCAN A
-- 28 | 0      | USE TEMP B-TREE FOR ORDER BY
-------------------------------------------------------------


In this case there is no suitable key, so a table scan is necessary (as evidenced by the line SCAN A). So this SELECT will be somewhat slower (depending on the number of rows in the table.)
Adding a key here would make this request faster.
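In Clarion terms that means adding a KEY on the sort field to the FILE declaration (usually via the dictionary). A sketch of what the Movies table from the example might look like with the extra key; the driver name, field sizes and key attributes here are illustrative:

```clarion
Movies               FILE,DRIVER('SQLite2'),PRE(Mov)
TitleKey               KEY(Mov:Title,Mov:Year),DUP,NOCASE
ExtractKey             KEY(Mov:Extract),DUP,NOCASE   ! new key - the EXTRACT sort no longer needs a scan
Record                 RECORD
Title                    STRING(100)
Year                     LONG
Extract                  STRING(500)
Guid                     STRING(16)
                       END
                     END
```

With ExtractKey in place, the EXPLAIN output for the second browse should change from SCAN A to a SEARCH using the new index.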

Adding more keys though does come with a cost. The size of the database gets bigger, because each new key needs to be stored somewhere. This is less of an issue than it used to be, since disk space is cheap. In addition, adding or updating records can be slower, since the additional key needs to be maintained at the same time. So while adding more keys makes reads faster, it makes writes slower. Usually this is a fair trade-off, since for most tables reading activity greatly exceeds writing activity. However for large tables, or tables accumulating records at very high speed (think IoT logging devices etc) fewer keys may be preferable.

After generating the explanation the driver then processes the command as normal, so the underlying command still works. Naturally there is some performance overhead in this, but the debugging output can be a critical link in understanding where performance is sub-optimal.

Keys

The role of additional Keys (or Indexes in SQL parlance) is one which draws many opinions. This variety is a side effect of several factors - firstly that they behave quite differently in SQL compared to ISAM, and secondly because the context of the table matters. These factors combine to mean there is no "one true answer" to any question about keys.

In ISAM the View engine, as coded in the file driver, is responsible for selecting the key to use to fetch records from the disk. It primarily uses the ORDER of the view to choose which key to use. Once it has selected the key, it then applies the FILTER to return the result set. The Clarion templates push the programmer gently in the direction of building browses and reports around a defined key, and also offer filter construction assistance (ie a RANGE) around that same key.

In SQL the task of filtering the records, and then ordering the result set is passed to the database server. The database engine has a query planner which takes the ORDER BY and WHERE into account when selecting a key. Primarily it uses the WHERE clause, not the ORDER BY to determine which key to use. Therefore in SQL it's important to add keys to match the filtering the program uses.

A common SQL belief among Clarion programmers is that "SQL does not need keys". There's a perception that a SQL database will generate its own indexes as and when it needs them, automatically identifying common patterns and optimizing itself to the needs of the client program. That perception isn't accurate. While some databases do offer automatic indexing features to a limited extent, SQL databases generally do not create indexes based on query patterns. Most databases collect statistics on table data and use them for query optimization, but they don't auto-create indexes. For example PostgreSQL's ANALYZE command updates statistics for the planner, but it won't create new indexes. Some databases suggest indexes based on workload analysis; for example SQL Server's Database Engine Tuning Advisor or MySQL's EXPLAIN ANALYZE can help identify missing indexes. But ultimately the Database Administrator (DBA) needs to manually create indexes based on this information.

Again, context is important. If your database has a DBA (and the DBA is willing to do the work) then you should leave index creation to the DBA. However if this is not the case, then the program is responsible for creating its own indexes. And indexes are essential to performance. The correct index at the right time will vastly improve read performance of the data. This is especially true for browses and reports. Use the EXPLAIN command to understand how your database is using keys and when it is doing a table scan (which likely means another key would be useful.)

Set / Next Loops

The simplest form of the SET/NEXT loop gives very little information to the database about the result set you are expecting. By being aware of the nature of your SET/NEXT loop, and using appropriate properties, you can help the driver, and the database, optimize the request for speed, and also for consuming fewer resources. This section explores the thought behind which properties to set, and when to set them.

Question: Are all the records from this position in the file going to be used, and is the total number of records in the table (say) under 5000?
Action: Set prop:Paging to false.
Reasoning: As long as the result set will fit into memory, and as long as all the records (or "most" of them) will be used, this approach is the fastest.

Question: Are you going to use the first records, up to some range limit, and then ignore all the records after that?
Action: Leave prop:Paging as true, and set a "reasonable" value for the number of records you are expecting in prop:Limit.
Reasoning: Using prop:Limit reduces the resources used by the server fetching information you won't use. Fetching one page with all the data is ideal; smaller, fewer pages reduce wasteful rows, at the cost of more requests to the database. Getting everything in one or two pages is ideal.
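Both answers can be sketched in code (table, key and property usage here are illustrative, and follow the properties described above):

```clarion
! Case 1: a small table, all (or most) records used - fetch everything in one request
Customers{prop:Paging} = false
Set(Cus:NameKey,Cus:NameKey)
Loop
  Next(Customers)
  If Errorcode() then break.
  ! process the record here
End

! Case 2: only the first records are needed - hint the expected count to the driver
Orders{prop:Limit} = 100            ! roughly the number of rows expected
Set(Ord:DateKey,Ord:DateKey)
```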

STREAM / FLUSH

ISAM files make use of STREAM and FLUSH to speed up multiple writes, so for actions like generating a number of records, or importing data from another source, these commands can have a large impact on performance. They revolve around the way data is written first to the disk cache, and only then to the physical disk. By default ISAM drivers wait until the disk is physically written before returning - this is best for "knowing" that the write is permanent before continuing. By using STREAM the program does not wait for this disk write, which improves performance considerably.

The cost of this speed-up was that STREAM required exclusive access to the file, which in turn blocked other programs on the LAN from accessing the data. This was often a problem which prevented STREAM from being used.

Since SQL drivers do not write to the disk (that's the job of the SQL server) the traditional SQL drivers did not implement STREAM or FLUSH.

Conceptually though, SQL databases do have facilities for improving write performance. If they know that a group of writes will happen in quick succession, they can make allowances, thus allowing the performance to be better. While the exact mechanism may differ from database to database the important thing is letting it know when the batch write will be started, and when it is complete.

The object based drivers use STREAM and FLUSH for this purpose. Before doing a batch update (ADD, APPEND, PUT, DELETE , UPSERT etc) call STREAM. Once it is complete call FLUSH.

Stream(Invoices)
! do a bunch of writes to the Invoices table
Flush(Invoices)


There may be locks that the database creates during the update, so it's a good idea to remember to call FLUSH once the batch is done. If FLUSH is not called explicitly, it will happen when the table closes.
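Putting that together, a batch import might be sketched like this (the Invoices table, its fields, and the loop counter declaration are illustrative):

```clarion
Stream(Invoices)                  ! tell the database a batch of writes is starting
Loop x = 1 To 1000
  Inv:Id     = x
  Inv:Amount = x * 10
  Add(Invoices)
End
Flush(Invoices)                   ! batch complete - release any locks held
```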

Result Sets

In some situations a common pattern is to generate a SELECT statement (either via a SET or SQL command, or via the now-deprecated Prop:Sql property) and get the results back into a queue. Code can then easily move around the queue to perform functions and so on.

The object based SQL drivers have built-in support for this pattern. There are a number of new commands which allow direct access to the Results buffer. These commands are supported for both FILE and VIEW structures.

While a result set is "linked" to the FILE or VIEW that sent the command, the command itself might not match the FILE or VIEW structure. Attempting to read the result via the FILE record buffers would likely result in type mismatches etc. For this reason it may be advantageous to access the results directly using;

Command Description
FLUSH(File)
FLUSH(View)
Clears the memory used by the result set.
RESULTGET(File,Row)
RESULTGET(View,Row)
Copies a row from the result set into the file record.
Use this to explicitly set the row number to a specific value in preparation for calls to RESULTPREVIOUS and RESULTNEXT.
RESULTGET(File,Row,<Column>,<Null>)
RESULTGET(View,Row,<Column>,<Null>)
These commands allow direct access to a cell in the result set. They return the contents of the cell. If the cell in the result set is NULL then an empty string is returned and the Null parameter (if supplied, as a *Byte) is set to true.
RESULTNEXT(File)
RESULTNEXT(View)
Increments the Row number, and moves that row of data from the result set into the record. If there are no more records the ErrorCode() is set to BadRecErr (33).
RESULTPREVIOUS(File)
RESULTPREVIOUS(View)
Decrements the Row number, and moves that row from the result set into the record. If the Row number is less than 1 then the ErrorCode() is set to BadRecErr (33).
RESULTROW(File)
RESULTROW(View)
File/View{prop:ResultRow}
Returns the current Row number which was last copied to the record.
RESULTRECORDS(File)
RESULTRECORDS(View)
File/View{prop:ResultRecords}
Returns the number of rows in the result set.

The results buffer is populated by a number of commands, but the most common usage would be with a SET / NEXT or SET / PREVIOUS pattern. Setting prop:Paging = false  would allow a single SET / NEXT to retrieve the whole result set. The commands ResultGet, ResultNext, ResultPrevious move a row from the result set into the file buffer(s). Since at this point the result set is already in memory, it behaves at memory related speeds.
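A minimal sketch of this pattern (the table, key, and loop counter declaration are illustrative):

```clarion
Customers{prop:Paging} = false           ! one SET / NEXT fetches the whole result set
Set(Cus:NameKey,Cus:NameKey)
Next(Customers)                          ! populates the result set
Loop r = 1 To ResultRecords(Customers)
  ResultGet(Customers,r)                 ! copy row r into the record buffer
  ! work with the Cus: fields here
End
Flush(Customers)                         ! optional - free the result set memory now
```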

The result set is cleared on a SET command. It may also be cleared on a NEXT or PREVIOUS command (if the SET is in paging mode, and a new page is about to be fetched.)

Typically it's "somewhat faster" to let the driver clean up the memory as it needs to; however if you retrieve a large result set, and have finished with it, and wish to free the memory it used, then call FLUSH.

The number of records in the result set (or in this page of the larger result set) can be obtained using the RESULTRECORDS command. The current position in the result set (or in this page of the larger result set) can be obtained using the RESULTROW command.

Unstructured Data

Unstructured Data is an overloaded term, and can mean different things in different contexts. The following contexts have an impact on the way you use unstructured data, and the performance implications of each approach.

Client making an adhoc request that does not match a FILE or VIEW structure

In this case the program makes use of the SQL function. It passes a custom SQL command to the server and gets a response. This might be an adhoc SELECT, or a call to a Stored Procedure, or whatever. The important thing though is that this request does not match up to any FILE or VIEW structure in the program.

In the past a generic structure was used to hold the results from such a call. Typically it was a QUEUE with some fixed number of fields, where the number and size of those fields were (arbitrarily) large enough to hold the returned result set.

Object based drivers offer two alternatives to this approach;

Firstly, the result set is automatically retrieved and stored in a Result Set object for you. This object can then be queried directly using the RESULTGET functions which allow you to specify a specific cell in the result. For example;

SQL(Invoice,'select count(*) from Invoice where Paid=1;')
ans = ResultGet(Invoice,1,1)


A second approach is to turn the result set into a single JSON object, or array, and return that to the program as a large string. The programmer can then feed that string into their favourite JSON parsing class, and work directly on the JSON tree.

Debugging

Introduction

When your code does not give you the result set you wanted, it usually means you've done something wrong. However discovering what that mistake is can be frustrating, because your code goes through the driver and morphs into something (perhaps not what you intended) in the request to the back end. When the database is SQL, debugging the issue is not as simple as, say, firing up TopScan to look at the table.

The drivers provide a number of ways to give clarity as to what exactly is happening with the command. Understanding these features is a key part to quickly, and effectively, identifying your mistake and moving forward.

Properties

A number of properties exist, which can be queried immediately after the call. These are the easiest things to check first, and usually show the problem immediately. In the case of a SET these can be examined after the NEXT or PREVIOUS command, and apply to both a FILE and a VIEW.


Property Description
prop:GroupBy
prop:Having
prop:OrderBy This is the ORDER BY Clause in the SELECT statement that's sent to the database. It is constructed of the key fields (if SET(key) is used), the ORDER attribute of the view (aka Prop:Order), and Prop:SqlOrder. The order is also supplemented with the Primary Key fields of all tables in the View, or the Primary key fields of the File.
prop:Project
prop:Where
prop:Sql This is the entire SQL statement. Assigning this into a string allows you to immediately examine the SQL statement generated by the driver. Note that some file driver commands (like CREATE) end up generating multiple SQL commands. In this case the prop:Sql returned is a semi-colon separated list of commands.
str.SetValue(Customers{prop:Sql})
or
s = Customers{prop:Sql}

Explain

SQL databases support a command called EXPLAIN which allows the database to pass information back to the Clarion program which "explains" the method the server used to build the result set.

The output varies wildly from one database engine to another, and some familiarity with the output from your preferred database engine is necessary to interpret the result.

When prop:Explain is set before a command, then the driver will ask the server for an explanation, but then will also go ahead and execute the command. So the program will continue working.
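For example (a sketch; resetting the property afterwards is an assumption on my part, in case it persists across commands):

```clarion
Movies{prop:Explain} = true
Set(Mov:TitleKey,Mov:TitleKey)
Next(Movies)                    ! plan goes to DebugView++, then the SELECT runs as normal
Movies{prop:Explain} = false
```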

Logging

File drivers allow logging. This allows the communication between the program and the database to be logged and inspected, which in turn can help isolate inefficiencies and errors.

Alert: Logging data flowing between the program and the server can be a security risk, because that data is logged in plain text. Therefore if at all possible, do not log data. Logging commands is not a problem, but logging data "leaks" that data out of the database.

Traditionally, the output of the logging can go to one of two places; either to DebugView++ (or equivalent) or a TEXT file on the disk.

Traditionally, logging can be turned on and off either through an INI file interface, or via code in the program. The INI interface is highly convenient (and can be set and unset using the TRACE.EXE program.)

Custom Field Types

Introduction

Object based drivers allow a custom class to be assigned to a field. This allows for ad-hoc support for database field types which may not exist in Clarion, or be natively supported by the driver.

For example, MsSQL databases can make use of the GUID data type. This is a 16 byte binary string type. You can just import it into your dictionary as a 16 byte string, but since it is binary it's not terribly useful like that. It would be more useful in the dictionary defined as a 36 byte text string, consisting of the binary value hex-encoded, plus some separator hyphens. Custom Fields allows you to create this alternative representation of the field, and acts as the "encoder / decoder" between the Clarion program, and the database.

Of course this technique is not limited to GUID fields. That's just an example. This technique can be used to translate from any field type in the database to a suitable field type in the dictionary.

Importantly these custom fields exist in your program, not in the driver itself. So they act as a way for the program to "extend" the driver, without having to adjust the driver code. If the driver supports this field type natively, then there's no need to make use of this custom field interface. (As it happens, the drivers support GUID fields natively, thus rendering this example moot.)

A custom field class implements the iDriverField interface, which is declared in CWDriver.Int. Examples of custom field types exist in DriverFields.Inc and DriverFields.Clw. As mentioned above these files are not compiled into any driver; they are compiled into your program.

Using a Custom Class

  1. In the dictionary declare the variable (typically a string) large enough to contain the program version of the field. In the GUID example, the dictionary string would be a String(36) or a Cstring(37).
  2. Add an Include to the global embed point, After Global Includes, to include your class.
      Include('DriverFields.Inc'),Once
  3. If safe to do so, declare a global, threaded, object of the custom class. In some cases (like in the example DriverFields.Inc) these classes are re-entrant, and thread safe, so you can declare a singleton of them in the INC file. In the DriverFields.Inc file an object (DriverGuid) of the class DriverGuidClass is created for you.
  4. Assign the interface, from the object, to the field.


    Note: This affects ALL instances of this FILE structure, on all threads, for the life of the program 

  5. Use the new command SETCUSTOMTYPE

      SetCustomType(Customers,'Cus:Guid',DriverGuid.iDriverField)

    The second parameter can either be the field name (in quotes) or the field number.

    The third parameter is the iDriverField interface, as implemented in your custom class.
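The same call using the field-number form would look like this (a sketch, assuming for illustration that Cus:Guid happens to be field number 3 in the record):

```clarion
SetCustomType(Customers,3,DriverGuid.iDriverField)
```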

Creating a Custom Class

  1. Create an INC file, and declare a class that (at least) implements the iDriverField interface;
      map
      end
     
      include('CWDriver.Int'),once                                             

    DriverGUIDClass  Class, Implements(iDriverField), Type, MODULE('DriverFields.Clw'), LINK('DriverFields.Clw',1), DLL(0)
                     End

    ! if the class has no properties, then a single global object (singleton) can be used for multiple fields
    DriverGuid   DriverGuidClass
  2. Create a matching CLW which fleshes out the interface methods;

    !-------
    DriverGUIDClass.iDriverField.GetFieldType   Procedure()!
    ! return a unique text identifier for the field type
      code
      return 'GUID'
    !-------
    DriverGUIDClass.iDriverField.FromRecordField  Procedure(Byte pFieldType, Long pFieldAddress, Long pFieldLength, Long pWriteAddress, Long pWriteLength)
      code
      ! your code goes here, this moves the field from the Clarion record structure, into the format as it will be stored in the database
      return 16  ! the number of bytes in the output

    !-------
    DriverGUIDClass.iDriverField.ToRecordField   Procedure(Byte pFieldType, Long pFieldAddress,Long pFieldLength, Long pSrcAddr, Long pSrcLength)
      code
      ! your code goes here. This moves the data from the database format into the format declared in the FILE structure (ie in the Clarion Dictionary.)
      return self.iDriverField.GetSourceSize()

    !-------
    DriverGUIDClass.iDriverField.ClearRecordField  Procedure(Byte pFieldType, Long pFieldAddress,Long pFieldSize,String pValue)
    str  string(16)
      code
      ! your code to clear the field goes here - eg str=''
      Return self.iDriverField.ToRecordField(pFieldType,pFieldAddress,pFieldSize,address(str),size(str))
     
    !-------
    DriverGUIDClass.iDriverField.GetSourceSize  Procedure()
      code
      Return 16 ! The byte length of the data in the database
    ! ---------
    DriverGUIDClass.iDriverField.GetSQLType    Procedure() !,STRING
      code
      return 'CHAR(16)'

The complexity of the code in these methods depends on the complexity of the field structure in the database, and the difficulty (or simplicity) of converting that to and from the Clarion record structure.

Considerations

CLEAR(File) is a file driver command. This is good: the call goes to the file driver, which in turn can clear the file record. It does this on a field-by-field basis - meaning that custom fields are cleared via the interface's ClearRecordField method.

By contrast calling Clear(Fil:record) is a simple call to CLEAR a GROUP. This call does not go through the driver. In this case the Record fields will be cleared as if they are simple Strings, or Longs etc.
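Side by side (a sketch, using an illustrative Customers table):

```clarion
Clear(Customers)      ! driver command - custom fields cleared via ClearRecordField
Clear(Cus:Record)     ! plain GROUP clear - custom fields treated as ordinary strings, longs etc
```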

Also note that (despite the Clarion documentation) in Clarion it is not possible to CLEAR(Table,n). The CLEAR interface method includes this parameter in the iDriverField interface in case this changes in the future, but for now the parameter is always 0.

Writing a Driver

Drivers Included

  1. DOS2

    An example of a flat-file driver. This driver mimics the DOS file driver, with some additional functionality.
    Experimenting with this driver is a good starting point for understanding how to build, register, update and deploy a driver.
  2. SQLite2

    An example of a SQL driver. This implements the SQLite database using the SQLITE3.DLL.
  3. InMemory2

    This is an example of a Derived driver. This driver derives from the SQLite2 driver.

Build Utilities

There are two parts of the build process which are somewhat different to a regular Clarion project build. This section explains them, and also explains the utilities that are used to overcome them.

Export Files / Pre-Build Command

The first issue is simple, and easily solved. When classes are created in hand-code and pre-compiled into a DLL, an EXP file needs to be maintained so that the DLL correctly exports the methods in those classes. In an APP system these EXP files are generated for you (and you may not even be aware of them) but when there's no APP they have to be maintained manually.

Since maintaining these files by hand is a pain, a utility is provided (in \clarion\accessory\driver\obdexport) called ObdExport.Exe. This is a command line program which reads in an INC file and creates an EXP file for you. As it is a command line program it can be called automatically as part of the build process. For the included drivers a (different) BAT file containing a call to this utility is included in the \clarion\accessory\driver folder.

For example, for the DOS2 driver, the file ExportDos2.Bat exists. This has been added to the CLADOS2.cwproj file as a Pre-Build Event Command Line.
Because pre-build command lines start in the \Clarion\bin folder (not the BAT file's location), the Pre-Build Command Line is set to

$(ProjectDir)ExportDos2.Bat

Inside the BAT file are two lines of code;

cd /d "%~dp0"
..\bin\obdexport.exe -i=.\Source\Dos2DriverClass.Inc -e=.\Source\ClaDos2.Exp -l=CLADOS2 -b=7E0000H -d=DOS2


The first line changes the "current drive and directory" to the BAT folder (typically \clarion\accessory\driver).
The second line calls the ObdExport program, which is in the \clarion\accessory\bin folder.

The options for the ObdExport program are as follows

Command Line Switch Meaning
-i The name of the INC file to parse
-e The name of the EXP file to create.
-l The Library name. This is used in the first line of the EXP file as
LIBRARY 'whatever' GUI
-b The Image_Base of the library. This parameter is optional. It should be unique for each DLL you create.
-d If the DLL being created is a File Driver, then it needs to export two functions first (for the IDE to use.) This option specifies the name of the driver. It is not used for the general DLL's (CLASTR and CLAOBD).

When ObdExport runs it sends information to DebugView++, so you can see it is working correctly. This is also helpful to understand where it may be going wrong if there is a problem.

The source code for ObdExport is in \driver\obdexport, however you should not need to recompile this utility.

Patching / Post-Build Command

The second difference is very technical[1], and again a utility is provided to resolve it. The utility is called ObdPatch.exe, and again a BAT file is in the driver folder to call it for each driver. The BAT file is called from the project, as a Post-Build Event Command Line.

PatchSQLite2.Bat

Inside the file are two lines of code

..\bin\obdPatch.exe -d=clalit2.dll -c=.\source\SQLite2Driver.Clw -w=1  
copy .\clalit2.dll ..\..\bin


The first line calls the ObdPatch.Exe and the second copies the DLL to the \clarion\bin folder.

The options for the ObdPatch program are as follows

Command Line Switch Meaning
-d The name of the DLL file to edit
-w If set to 1 then the new address is written directly into the DLL file
-c The name of the CLW file containing the TDescAddress equate. If this option is set then the CLW file will be updated.

When ObdPatch runs it sends information to DebugView++, so you can see it is working correctly. This is also helpful to understand where it may be going wrong if there is a problem.

The source code for ObdPatch is in \driver\obdpatch, however you should not need to recompile this utility.

Note 1: The technical part is not really interesting. But essentially the DLL needs to have an address hard-coded into it at compile time. The compiler cannot do this, so it's done by the utility after the DLL is compiled. And the source code is updated with the address. In the source code the address is an equate;

TDescAddress   Equate(007e7140h)

GPF's

Due to some cleverness by the compiler, some code in the driver will not GPF in the usual way[1]. Rather, if the code in the driver GPF's then the call will terminate at that point, and the code will continue to run from the next line in your program. The ErrorCode() will be set to 90 (File System Error) and FileError() will return the stack trace.

This is a fantastic feature because it isolates your program from GPF's caused in external libraries which are used by the driver (ie libraries supplied by the database supplier.) However it can make writing drivers challenging, because useful GPF's are suppressed.

For some kinds of errors, while constructing the driver, it may be advantageous to call the method directly (thus bypassing the GPF protection) in order to get an appropriate GPF, which will then lead to the source of the error.

Note 1: If the call is a new command, one not present for traditional drivers, then it will likely GPF as normal. However if it's property related, or a command used by traditional drivers, then it won't GPF.

Object Based Drivers

Drivers based on these classes are colloquially known as object based drivers. This is because they are built using a set of base classes, which in turn makes building a driver for a specific database easier, requiring only the code that makes that database unique. The following base classes are provided in DriverClass.Clw, and defined in DriverClass.Inc.

Interfaces are declared in CWDriver.Int.

Class Purpose
DriverBaseClass Used to share functionality between the FILE classes and the VIEW classes. Not usually used directly.
DriverFileClass A class which contains common driver functionality. Derive from this class if you are making an ISAM (TPS-Like) or Flat-File (DOS-Like) driver.
DriverFileSQLClass Based on the DriverFileClass, but with SQL specific code. Derive from this class if you are making a SQL driver.
DriverResultSetClass SQL drivers have the ability to hold a result set (client side) which can then be iterated through using NEXT and PREVIOUS. This result set is stored by a separate class, and is accessed through the iResultSet interface. The DriverResultSetClass is a default implementation of this interface, however it's possible to replace this class if necessary (perhaps to improve performance, or whatever.)
Interface Purpose
iDriverField
This interface allows for custom field types to be added at the application level, external to the driver.
iResultSet
To avoid the need for cursors on the SQL side, result sets can be moved to the client side and stored in a ResultSet. This result set can then be iterated on using NEXT and PREVIOUS. Note that the size of this result is usually limited to the FetchSize. The FetchSize can be set by the first parameter to the BUFFER command.

As with the custom field types, the Result property in the class can be set externally to the driver to any implementation of the interface.

Deriving a File Driver, and applying it to a FILE Structure

In most situations the supplied object based driver will be sufficient for your programming needs. This driver has been registered and you use it like any other driver. However since the driver is supplied as source code, and as a class, it's possible to inspect the code the driver is using and, if you like, derive your own class from it, and set one (or more) tables to use your derived class.

For example, say you wanted to add functionality to the DOS2 driver, overriding the ADD method(s). This is a straight-forward process.

1. Create a Derived Class

Derived classes consist of two files, an INC file and a CLW file. These files belong in your \Clarion\Accessory\Drivers folder.

While the nuances of creating classes are beyond the scope of this document, a simple INC file looks like this;

      include('Dos2DriverClass.Inc'),once
DosReadOnlyClass CLASS(DriverFileDos2Class), Type, MODULE('DosReadOnlyClass.Clw'), LINK('DosReadOnlyClass.Clw', YourLinkMode), DLL(YourDLLMode)
ADD                PROCEDURE(),DERIVED
ADD                PROCEDURE(LONG pLength),DERIVED
                 END


Then in the CLW you flesh out the methods;

  member()
  map
  end 
  include('DosReadOnlyClass.Inc'),once
  include('errors.clw'),once

!-----------------------------------------------------------------
DosReadOnlyClass.ADD              PROCEDURE() 
  Code
  Self.SetError(NoDriverSupport)

!-----------------------------------------------------------------
DosReadOnlyClass.ADD              PROCEDURE(LONG pLength) 
  Code
  Self.SetError(NoDriverSupport)

2. Use the derived class in your program

In global Includes

  include('DosReadOnlyClass.Inc'),once

In your project settings
YourLinkMode and YourDLLMode need to be set appropriately.

Declare an object. Put it in the same scope as your FILE structure. (So usually GLOBAL).

ImportReadOnly   DosReadOnlyClass

At a suitable point in your code (presumably before you start using the file, or at the moment when you want to switch it to the new derived class)

Import{prop:Object} &= ImportReadOnly

In the above line Import is the label of the FILE structure.

If the FILE is threaded (,THREAD) then your object should also be ,THREAD and this assignment needs to be done on a per-thread (when desired) basis.
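For example, a threaded sketch (the FILE declaration and object names here are illustrative only);

Import           FILE,DRIVER('DOS2'),NAME('import.txt'),THREAD
Record             RECORD
Line                 STRING(255)
                   END
                 END

ImportReadOnly   DosReadOnlyClass,THREAD

Each thread that wants the derived behavior then performs the Import{prop:Object} &= ImportReadOnly assignment for itself.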

Creating a New Driver

The following are a set of tips for writing a new driver.
  1. Your driver class will derive from either an existing driver, or the supplied DriverFileClass and DriverViewClass classes.
  2. Starting from a "blank" driver is useful; the DoNothingDriver is recommended. It contains the basic boilerplate code (calls to clear the error, calls to the parent, and so on). TODO comments assist in identifying the places where you will need to write code. Edit these files - give them a new name, new class name, new LinkMode and DLL Mode.
  3. Copy in the Driver.Clw file, and rename and edit appropriately.
  4. If you derive any of these methods you MUST call the parent method AFTER your code (regardless of whether they worked or not);
    CLOSE, GET, NEXT, PREVIOUS, REGET.

    Note: Even if you don't support some of these operations, or some flavors of the operations, then you likely DO want to derive these methods, calling the parent first, and then setting
    Self.SetError(NoDriverSupport)
    Since these parent methods do not set this error.
  5. If you derive any of the following methods you MUST call the Self.PreRead(OpCode) method BEFORE your code.
      GET, REGET, NEXT, PREVIOUS. If ErrorCode() is set after this call then you should abort the read.
  6. If you derive any of the following methods you MUST call the parent method BEFORE your code;
    OPEN, PUT, ADD, APPEND, DELETE, SET, RESET. You should abort the call if the parent method sets the ErrorCode().

    Note: Even if you don't support some of these operations, or some flavors of the operations, then  you likely DO want to derive these methods, calling the parent first, and then setting
    Self.SetError(NoDriverSupport)
    Since these parent methods do not set this error.
  7. If you derive any of the following methods you MUST call the Self.AfterWrite(OpCode) method AFTER your code even if the write is not successful, and an error is generated;
     PUT , ADD, APPEND, DELETE.
  8. Your new class needs a copy of the ReplaceObject method. Copy this from DriverClass.Clw. Change the type of the old field so that it uses your class name, not the DriverFileClass name. That's the only change required. Do not call the parent method.

    Dos2DriverClass.REPLACEOBJECT          PROCEDURE(Long pOldObjectAddress)
    old    &DriverFileDos2Class
      Code
      If pOldObjectAddress <> 0
        old &= (pOldObjectAddress)
        self :=: old
        dispose(old)
      End 
      self.ClaFCB.rblock = address(self) 
      Return
  9. Derive the GetPROP:Driver method to return the name of your driver.

    DriverFileDos2Class.GETPROP:Driver         PROCEDURE()!,STRING
      code
      return 'DOS2'
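Putting these rules together, a minimal derived write method and read method might be sketched like this. This is an illustrative sketch only - the class name is made up, and the Opcode equate names are assumed to follow the Opcode:ADD pattern shown later in this document;

MyDriverClass.ADD                 PROCEDURE()
  Code
  Parent.ADD()                         ! rule 6: parent first, skip derived code on error
  If ErrorCode() = 0
    ! driver specific add code goes here
  End
  Self.AfterWrite(Opcode:ADD)          ! rule 7: always called, even if an error occurred
  Return

MyDriverClass.NEXT                PROCEDURE()
  Code
  Self.PreRead(Opcode:NEXT)            ! rule 5: abort the read if an error is set
  If ErrorCode() = 0
    ! driver specific next code goes here
  End
  Parent.NEXT()                        ! rule 4: parent called after derived code
  Return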

Compiling a Driver

Compiling the drivers is fairly straightforward, but there is a wrinkle.

The IDE, and specifically the Dictionary Editor, reads the DLL to determine the supported data types and so on. To do this it doesn't "Load" the DLL in the usual fashion, but rather reads the DLL directly.  One of the values it reads is the DriverGroup structure, and specifically the tDesc property in that group, which is initialized to an address.

The compiler though cannot prime groups with addresses. To solve this problem a small utility program (ObdPatch.Exe) exists. This utility inspects the DLL, and tweaks the DLL file so that the address is set correctly. It is therefore called at the end of the compile, to set the correct value into the DLL file.

Additionally the utility will update the CLW file, by setting the TDescAddress equate. Although this doesn't make much difference for this compile it means that the value is set correctly for the next compile. It's worth noting that the value can potentially change from one compile to the next (if the source code changes.)

Example;

c:\clarion111\bin\obdPatch.exe -d=cladnd.dll -c=DoNothingDriver.Clw -w=1

Commands Summary

In the context of deriving classes, and deriving methods, there is some parent code which is attached to each method. This table acts as a reference for deriving these methods.

Method    Required File State [1]    When is Parent called    Notes
ADD Open Before derived code Call parent first and abort if error.
Derived class must call AfterWrite method after derived code.
ADDRELATION Either
Handled in DriverFileClass, so no need to derive.
APPEND Open Before derived code Call parent first and abort if error.
Derived class must call AfterWrite method after derived code.
BINDFIELD

Handled in DriverViewClass, so no need to derive.
BOF Open Don't call parent No code in base class. Defaults to NoDriverSupport (80) Error.
Derived class should call _checkopen, and abort if error.
BUFFER Open Before derived code Call parent first, and abort if error.
BUILD Either Don't call parent No code in base class. Defaults to NoDriverSupport (80) Error.
BYTES Either Before Derived Code SQL Drivers don't need to derive this method, the DriverSQLClass contains the code.
CLEARFILE Either
Handled in DriverFileClass, so no need to derive.
CLEARPROPS_ Either As Appropriate
CLOSE Either After derived Code
Derived code should Self.SetError(NoError) at start. If called when the FILE is already closed, then it has the effect of clearing all the object properties and buffers.
CONNECT Closed Don't call parent No code in base class. Defaults to NoDriverSupport (80) Error.
COMMIT Open

COPY Either Don't call  parent Derived code should Self.SetError(NoError) at start.
No code in base class. Defaults to NoDriverSupport (80) Error.
CREATE[2] Closed Before derived code Call parent and abort if error.
CREATEDB Closed Before derived code Call parent and abort if error.
DELETE Open Before derived code Call parent first and abort if error.
Derived class must call AfterWrite method after derived code.
DISCONNECT Either Don't call parent No code in base class. Defaults to NoDriverSupport (80) Error.
DUPLICATE() Open
No need to derive. Calls Duplicate(key) for each key in turn.
DUPLICATE(key) Open Don't call parent No code in base class. Defaults to NoDriverSupport (80) Error.
EMPTY Open Before derived code Call parent and abort if error.
EOF Open Don't call parent
No code in base class. Defaults to NoDriverSupport (80) Error, and defaults to returning false.
EXCEPT


EXISTS


EXPORT


FROMQUEUE


FLUSH Either Before derived code Called by CLOSE. 
FREESTATE Either
Drivers don't need to derive this method, the DriverFileClass contains the code.
GET Open After derived code Derived class must call BeforeRead method before derived code, and abort if error. For forms of GET which are not supported do not derive; the parent class will generate a NoDriverSupport (80) Error.
GETNULLS Either
Drivers don't need to derive this method, the DriverFileClass contains the code.
GETRANDOM


GETSTATE Either
Drivers don't need to derive this method, the DriverFileClass contains the code.
HOLD Open Don't call parent No code in base class. Defaults to NoDriverSupport (80) Error.
INTERSECT


JOINTABLE


LOCK Open Don't call parent No code in base class. Defaults to NoDriverSupport (80) Error.
LOG


LOGGINGON
LOGGINGOFF



LOGOUT Open

LOGOUTCONNECTION


MERGE Open Don't call Parent Derived code should Self.SetError(NoError) at start.
No code in base class. Defaults to NoDriverSupport (80) Error.
NAME Either
Drivers don't need to derive this method, the DriverFileClass contains the code.
NEXT Open After derived code Driver must derive this method, if only to call BeforeRead. Derived class must call BeforeRead method before derived code, and abort if error. 
NOMEMO Open
Drivers don't need to derive this method, the DriverFileClass contains the code.
NULL Either
Drivers don't need to derive this method, the DriverFileClass contains the code.
OPEN Closed Before derived code
Call parent and abort if error.
If successful derived code must set self._opened = true .
PACK Either Don't call parent No code in base class. Defaults to NoDriverSupport (80) Error.
SQL drivers should implement VACUUM here.
POINTER
POINTER64
Open Don't call parent No code in base class. Defaults to NoDriverSupport (80) Error.
If driver supports this, should call If self._CheckOpen() = 0 then Return 0.
Note that POINTER does not clear the errorcode().
POSITION Open
Drivers don't need to derive this method, the DriverFileClass contains the code.
PREVIOUS Open After derived code Driver must derive this method, if only to call BeforeRead. Derived class must call BeforeRead method before derived code, and abort if error. 
PROJECTFIELD


PUT Open Before derived code Call parent first and abort if error.
Derived class must call AfterWrite method after derived code.
RECORDS
RECORDS64
Either Don't call parent No code in base class. Defaults to NoDriverSupport (80) Error.
REGET Open After derived code SQL Drivers do not need to derive these methods, code is in DriverSQLClass.
Derived class must call BeforeRead method before derived code, and abort if error.
If driver does not support keys, then derive REGET(key,str) to set  NoDriverSupport (80) Error.
RELEASE Open Don't call parent No code in base class. Defaults to NoDriverSupport (80) Error.
REMOVE Closed Don't call parent No code in base class. Defaults to NoDriverSupport (80) Error.
RENAME Closed Don't call parent No code in base class. Defaults to NoDriverSupport (80) Error.
REOPEN Either Examine and decide Examine the parent method to decide if it is applicable or not.
RESET Open Before derived code
Code in base classes may be sufficient as it routes through _SET.
RESTORESTATE Either
Drivers don't need to derive this method, the DriverFileClass contains the code.
ROLLBACK Open

SAVEPOINT


SEARCH


SEND Either

SET Open Before derived code Class probably doesn't need to derive this method. Every form flows through _Set.
Derived class will likely just derive the _Set method. Call the parent._set first.
SETNONNULL Either
Drivers don't need to derive this method, the DriverFileClass contains the code.
SETNULL Either
Drivers don't need to derive this method, the DriverFileClass contains the code.
SETNULLS Either
Drivers don't need to derive this method, the DriverFileClass contains the code.
SHARE Closed
Drivers don't need to derive this method, the DriverFileClass contains the code.
SHARE is the same as OPEN, but with a different default access mode.
SKIP Open Before derived code SQL Drivers don't need to derive this method, the DriverFileClass contains the code.
SQL


STATUS Either
Drivers don't need to derive this method, the DriverFileClass contains the code.
STREAM Open Don't call parent No code in base class. Defaults to NoDriverSupport (80) Error.
SUBSET


TOQUEUE


UNLOCK Open Don't call parent No code in base class. Defaults to NoDriverSupport (80) Error.
UPSERT Open Before derived code Call parent first and abort if error.
Derived class must call AfterWrite method after derived code.
UNION
UNIONALL



WATCH Open
Drivers don't need to derive this method, the DriverFileClass contains the code.

Notes

[1] : Some of these deviate from documented requirements for Traditional Drivers. And different object based drivers may impose different requirements. See driver documentation for more details.

[2] : Command is CREATE. Method name in the class is CreateTable.

DriverName.Clw

The source to the file driver consists of 3 files. The two class files (.INC and .CLW) derived from DriverFileClass, and the "glue" module which acts as the link between the file driver commands (as compiled by the compiler) and the class. It also contains the entry points called by the IDE when it is making use of the driver.  The name of this CLW file is unique to each driver. For example for the DOS2 driver, this file is called Dos2Driver.Clw.

Each driver has its own module, and it has to be constructed very carefully. Mistakes in this module can be difficult to debug, and will also make the DLL behave strangely when used by the IDE. Carefully work through the following steps to create your own DriverName.Clw file.
  1. Copy an existing file to your new driver folder. Name the file appropriately (to match the driver you are making.) In these instructions the assumption is that you copied from the Dos2Driver.Clw file.
  2. Adjust the first set of Equates. More details about the components of this set are described below.
  3. Replace the DriverFileDos2Class name with your driver class name. So, for example, if your class is say DriverFileRedisClass then you can do a search-and-replace replacing DriverFileDos2Class with DriverFileRedisClass. There will likely be 5 or 6 instances of this name (depending on the comments.)
  4. Determine Driver Functionality.

    This step is the most critical because it requires changes in two places that MUST match. The DRIVER_HAS equates need to be set to YES or NO. To simplify things there are a set of equates which broadly-speaking cover functionality. For example DRIVER_HAS_KEYS can be set to either YES or NO, which in turn then cascades to all the equates that are set to equal DRIVER_HAS_KEYS.

    However this is only one half of the puzzle. The TypeDescriptor field has to be set very carefully to match the equates. Items that are NO should have a pipe ( | ) symbol at the start of the line. Items which are YES must not. So, for example, in the DOS2 driver BLOBS are not supported, and so

    DRIVER_HAS_BLOBS                    Equate(NO)

    The beginning of the TypeDescriptor then becomes;

    TypeDescriptor  String( '' | 
                     & chr(NUM_DRIVER_OPS)         |
                     & chr(Opcode:ADD)             | 
                     & chr(Opcode:ADDfilelen)      |                   
                     & chr(Opcode:APPEND)          |  
                     & chr(Opcode:APPENDlen)       |  
                     |& chr(Opcode:BLOB_DOPROPERTY) | 
                     |& chr(Opcode:_BLOB_SETPROPERTY) |  
                     |& chr(Opcode:_BLOB_GETPROPERTY) |  


    Notice that the equates which are set to NO, also have a leading | at the front of the line. Items which are YES (like say Opcode:ADD) do not.

    TIP: The DoNothingDriver files have most of the functionality turned on. You can leave it as-is to start with. Then later on, when you decide specific commands are not supported you can return here and turn them off.
  5. Determine Driver Types. In the file is a list of equates, each set to YES or NO, which determine whether a specific Data Type is valid for the driver or not. This list starts with;

    DRIVER_HAS_ANY              Equate(NO)  !
    DRIVER_HAS_BFLOAT4          Equate(NO) !  /* IBM Basic 4 byte float             */
    DRIVER_HAS_BFLOAT8          Equate(NO) !  /* IBM Basic 8 byte float             */
    DRIVER_HAS_BLOB             Equate(YES) !  /* Blob                               */
    DRIVER_HAS_BYTE             Equate(YES) !  /* Signed 1 byte binary               */
    DRIVER_HAS_CSTRING          Equate(YES) !  /* C string (trailing null)           */


    For each type determine if you want the driver to support it or not. Then set each one to YES or NO.

    As before, this list MUST match the list inside the TypeDescriptor. For each type supported make sure the line does not start with a pipe symbol. For every type which is NOT supported it must start with a pipe symbol. So, in the TypeDescriptor the first few Type lines are;

                    & chr(NUM_DRIVER_TYPES) |
                |    & chr(ClaANY)          |
                |    & chr(ClaBFLOAT4)      |
                |    & chr(ClaBFLOAT8)      |
                   & chr(ClaBLOB)           |
                   & chr(ClaBYTE)           |
                   & chr(ClaCSTRING)        |


    and so on.
  6. The DriverGroup should be named, and populated for your driver. The fields in this group are explained below.

Name Equates

The first block of equates (between the PROGRAM and MAP commands) deal with the name of your driver. These equates are discussed here.

  • ShortName   Equate('DOS2')  
    The shortname is used in various places, such as part of DLL names, but typically is just 3 or 4 characters long. The length comes into play because the DllName and DsiDllName equates are fixed length (13 bytes) and include the shortname. So the easiest approach is to use a 4 character name, otherwise you need to adjust those equates as well.
  • LongName    Equate('DOS2Driver')
    A longer form of the name if desired. Spaces are not allowed, and the name must start with a letter. The long name is the name used in the Dictionary droplist when selecting a driver. It's also the name of the Pipe function, as exported by the DLL. Changing the LongName requires that you change the EXP file as well to match.
  • DriverName  Equate(LongName & '<0>{10}') 
    The DriverName is used in the code when declaring a FILE structure. This equate must be exactly 20 characters long.
  • Copyright   Equate('(c) 2024 by CapeSoft<0>{20}')
    A copyright message. This must be exactly 40 characters long.
  • DriverDesc  Equate('DOS2 (Binary)<0>{17}')
    The driver description. This must be exactly 30 characters long. This appears in the Registered File Drivers list.
  • DLLName     Equate('CLA'&ShortName&'.DLL<0>')
    The conventional DLL name for a file driver, consisting of CLA, then the ShortName equate, followed by .DLL. This length must be exactly 12 characters.
  • DSIDLLNAME  Equate('CLA'&ShortName&'S.DLL')   
    The conventional DSI (Data Source Import) name for a file driver, consisting of the DLLName with a trailing S. This is a separate DLL for importing the file structure from a database into the dictionary. DSI DLLs are optional, and are not covered by the classes at this point.
  • TDescAddress Equate(007EC7B4h)
    This is the address of the TDescAddress structure in this DLL, after it has been compiled. This has to be set at compile time because it's used to register the DLL, and the DLL is not "loaded" during that registration - but rather read directly as a DOS Binary file. This address can be obtained by using the CHECKDRIVER.EXE utility (yet to be written, am using PE Explorer in the interim.)

Driver Group

This group is a data structure which is set at compile time, and exported from the DLL. It is used when the driver is registered into the IDE (a process that happens without loading the DLL). For this reason it has to be set very precisely, and care has to be taken not to change the length, or order, of any fields in the group.
  1. raw_name
  2. dll_name
  3. copywrite
  4. desc
  5. pipeFunctionAddress
  6. drvattr
  7. clastrlen
  8. cstrlen
  9. reclen
  10. clastrlen32
  11. cstrlen32
  12. reclen32
  13. maxdims
  14. maxkeys
  15. maxmemos
  16. memosize
  17. memosize32
  18. fldsize
  19. kcbsize
  20. mcbsize
  21. fcbsize
  22. tdesc
  23. dsi_name
  24. driverMeta

Pipe Functions

When a file driver command is executed, the compiler makes a call to the Pipe function passing various parameters. The pipe function is named uniquely for each driver, and contains only a small amount of code. Mostly the function passes the call through to the underlying object (which has a Pipe method.)

A second Pipe function is provided for View calls.

Finally a function is provided which creates, and binds an object to the FILE structure.

Apart from the global naming replacements (described above) no changes to this code are required.

Driver Properties

The Clarion language defines a number of commands which are passed through to the file driver for processing. Commands like ADD, SET, NEXT and so on are a database independent way for the code to access the data. These commands are "known" by the compiler, and compiled in a specific way, which in turn means the commands are routed to the driver Pipe method, and from there into the driver object itself.

In addition to the commands a property syntax was introduced. This allows a generic way to get information from the driver, to push information into the driver, and to get the driver to execute code. Property syntax exists in three forms, which in turn translate into 3 methods in the driver;

Code Method
something = file{prop:someprop}
something = file{prop:someprop,parameter}
GetProperty(String pProperty,Long parameter=0)
file{prop:someprop} = something
file{prop:someprop,parameter} = something
SetProperty(String pProperty,Long parameter=0,String pValue)
file{prop:someprop}
file{prop:someprop,parameter}
DoProperty(String pProperty,Long parameter=0)

In all cases the parameter is optional, and it is a LONG value. This is usually (but does not have to be) the field number in the file structure.

Not immediately obvious though is that the prop:someprop value is a STRING. All the declared properties are numeric Equates, however this is not a limitation. String properties can just as easily be passed;

something = file{'sometext'}

Naturally it is up to the driver to add code for each property, or text, that it wishes to support. Most of the properties are driver independent, and so are handled in the base classes, however new drivers can use this property syntax to implement functionality that may not be exposed in other drivers, or which are custom to a specific driver.
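For example, all three forms in use. Here prop:Text and the 'Flush' text property are placeholders standing in for whatever properties a given driver actually supports, and MyTable is any FILE using an object based driver;

  SomeString = MyTable{prop:Text,3}        ! routed to GetProperty, parameter 3
  MyTable{prop:Text,3} = 'new value'       ! routed to SetProperty
  MyTable{'Flush'}                         ! routed to DoProperty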

DOS2 Driver

The goal of the DOS2 driver is twofold. Firstly it acts as an example flat-file driver. It provides a good starting point for other flat-file drivers, and can be derived from. For example the ASCII2 and BASIC2 drivers could derive from this driver.

It is however also a full-featured drop-in replacement for the DOS driver, which contains a few enhancements over the current DOS driver. It is designed to be backward compatible with the existing driver, except for the items mentioned below.

Headline Differences

Code Old Behavior New Behavior
On-Disk Files > 2 gigs in size Inconsistent support up to 4 gigs, then no support. Supported
BOF Command not Supported Supported
BUFFER
BUFFER(-1,MaxMemoryInBytes)
Command not Supported Supported
BYTES(table)
table{prop:Filesize}
Not supported if file is closed. Error set to File Not Open (37). Supported
EXISTS Command not Supported Supported. Can call EXISTS(File).
POSITION Returns a 4-byte string Returns an 8 byte string
RECORDS Requires the FILE to be Open. Error set to File Not Open (37). Works if the file is Open or Closed.
SKIP Command not Supported Supported. Note that SKIP jumps by Disk Record Size NOT Bytes. To Jump by Bytes use SET(FilePointer) instead.

BIGINT

BIGINT is a new data type to hold a large integer. It is declared as an EQUATE in CwDriver.Inc.
By using BIGINT, source code will be backwards compatible with older Clarion versions (ie Clarion 11, 10, 9 etc). It can be used in code, however should be avoided for on-disk storage.

Currently BIGINT equates to a REAL which means it is a 52 bit integer. While this is smaller than the 64 bits offered by the Int64 type, it's big enough for disk files up to about 4.5 petabytes (about 4500 terabytes).

Example

i BIGINT
  Code
  i = ImportFile{prop:Filesize}


BUFFER

The traditional DOS driver does not support the BUFFER command. The object based DOS driver does support the command, and in some cases can speed up access to the file a lot. The key to using the BUFFER command effectively is to understand the way in which your program will be using the disk file.

When a buffer size is set, bigger blocks of the file are read from the disk into RAM. This improves performance in two use cases;

If you are sequentially processing a file from start to finish, then performance increases as the number of reads decreases. In other words the ideal approach is to slurp the whole file into RAM, and then process it from there. While this may not be possible (due to RAM constraints) setting the buffer size high would be beneficial.

If you know the file will fit into RAM then set

BUFFER(somefile,Driver:BufferWholeFile)

If you want to allocate as large a buffer as possible, up to a maximum size, then use

BUFFER(somefile,Driver:BufferWholeFile,MaximumSizeInBytes)

This sets a cap on how large the buffer can be.

Alternatively set the buffer size to some (large) number of records; BUFFER(somefile,1000)

If you are doing random reads on the file, jumping around forwards and backwards, then having the file completely in RAM will make a dramatic improvement.

If you can only buffer a portion of the file, and if the record you are reading is not in the buffer then a new buffer will need to be loaded. If the buffer size is large then this read takes longer than reading just a single record. Whether this is worth it or not depends on the number of times your following reads are already loaded (a cache "hit") versus the number of times it goes back to the disk (a cache "miss").

To free up all memory allocated to the buffer call
BUFFER(somefile,Driver:FreeBuffer)

Large File Support

The DOS2 driver fully supports large files, specifically files larger than 2 gigabytes, and indeed larger than 4 gigabytes. However some of the file commands make use of a LONG either as a parameter, or return value. For these commands an alternative command exists, which can either accept, or return, a larger number.

Traditional Command New Command Description
BYTES BYTES64 The BYTES command is limited to returning a LONG value. The BYTES64 command returns a BIGINT.
GET
GET64
The second parameter to the GET command is the position in the file to read from. The GET command limits this to a LONG value. The GET64 command accepts a BIGINT as the parameter.
POINTER POINTER64 The POINTER command is limited to returning a LONG value. The POINTER64 command returns a BIGINT.
PUT PUT64 The second parameter to the PUT command is the position in the file to write to. The PUT command limits this to a LONG value. The PUT64 command accepts a BIGINT as the parameter.
RECORDS RECORDS64 The RECORDS command is limited to returning a LONG value. The RECORDS64 command returns a BIGINT.
SET(file,pointer) SET64(file,pointer) The second parameter to the SET command is the position in the file to read from. The SET command limits this to a LONG value. The SET64 command accepts a BIGINT as the parameter.
SKIP SKIP64 The SKIP command is limited to skipping a LONG value. The SKIP64 command can skip a BIGINT value.
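The commands above can be sketched in use like this. ImportFile is a hypothetical DOS2 FILE, already open for reading, and this is an illustrative sketch rather than a definitive example;

FileSize  BIGINT
Pos       BIGINT
  Code
  FileSize = BYTES64(ImportFile)     ! total file size, may exceed a LONG
  SET64(ImportFile,1)                ! position to the start of the file
  NEXT(ImportFile)                   ! read the first record
  Pos = POINTER64(ImportFile)        ! current position, as a BIGINT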

REGET

Reget(FILE File, String Position)

When doing a REGET using the DOS2 driver, a record from a previous position is retrieved from the disk file. The Position parameter is the value returned by an earlier call to POSITION.

Note however that REGET does not repeat the Length parameter which may have been passed to an earlier GET. In other words, when using REGET the whole record is read from the position identified by the Position parameter.
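For example (ImportFile is a hypothetical DOS2 FILE, already open);

SavedPos  STRING(8)                  ! POSITION returns an 8 byte string in DOS2
  Code
  NEXT(ImportFile)
  SavedPos = POSITION(ImportFile)    ! remember where this record is
  ! other reads can happen here
  REGET(ImportFile,SavedPos)         ! re-read the whole record at that position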

SQLite2 Driver

The goal of the SQLite2 driver is twofold. Firstly it acts as an example SQL file driver. It provides a good starting point for other SQL file drivers.

It is however also a full-featured drop-in replacement for the SQLite driver, and contains several enhancements over the current driver. It is designed to be backward compatible with the existing driver, except for the items mentioned below. For a full list of new functionality and behavior see the Files Index.

SQLite3.dll

SQLite does not make use of a database server, as other SQL databases do, but rather a single DLL file (SQLite3.dll) encapsulates all the SQLite functionality. This file is originally sourced from https://sqlite.org/download.html .

This file is very backwards compatible, and using a later version instead of an earlier version should not cause any complications.

As of this writing, the version required by the SQLite2 driver is 3.38.0 or later. If the program attempts to use an earlier DLL a MESSAGE will be displayed on program startup.

Headline Differences

Code Traditional Behavior Object Based Behavior
CREATE Supported, with limits. Supported with smarter defaults, and allows the developer more control.
NULLS Supported Supported. However the STRING returned by GETNULLS is a different format.

Database Design

This section discusses database design decisions that can impact space requirements or performance when designing for the SQLite database.


Page Size

The length of the record in the dictionary needs to fit into the SQLite page size. SQL BLOB and TEXT fields are not included in this number.
The page size is set at the database level, and cannot be directly changed in an existing database. (A new database needs to be created, and the data copied.)
The default page size is 4K, but a variety of options up to 64K exist (8K, 16K, 32K, 64K).
SEND(anytable,'pragma_page_size',16384)
SEND(anytable,'pragma_page_size',8192)

ADD

When adding rows to the table, STRING and MEMO fields (which do not have the BINARY attribute) are clipped before being stored.

CSTRING fields are not clipped.

PSTRING fields are stored "as set" - in other words they can have trailing spaces, but do not include padding spaces.

This behavior applies to all writes including PUT, APPEND and UPSERT.

This behavior can cause problems with the traditional SQLite driver, which expects strings to be a fixed length, and padded with spaces. Specifically doing a NEXT after a RESET may fail. This would only be an issue where the traditional driver, and object based driver are accessing the same tables in the same database file, which is not recommended.

CONNECT

For the SQLite driver, the CONNECT command can take an optional Options parameter.

This is passed to sqlite3_open_v2 as the flags parameter. It defaults to SQLITE_OPEN_READWRITE + SQLITE_OPEN_CREATE .
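For example, a sketch of opening an existing database without creating it if it is missing. This assumes the SQLITE_OPEN_READWRITE equate is declared by the driver include files, and that MyTable is a FILE using the SQLite2 driver;

  CONNECT(MyTable,SQLITE_OPEN_READWRITE)   ! omits SQLITE_OPEN_CREATE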

CREATE

One major difference in the SQLite2 driver over the traditional SQLite driver is the CREATE command. The traditional driver has a very simple CREATE command, which does not allow for much control over creating the table. The documentation goes so far as to say;

Although CREATE is supported, only basic use is recommended, because it cannot handle all of the features (i.e., constraints and relationships) that an actual SQL CREATE TABLE statement can support. Use of PROP:SQL is preferred in this case.

In contrast to this the object based SQL drivers implement CREATE, and by default create the table correctly. They also give the developer a mechanism to control the specifics of the table creation.

Traditional Behavior: If the table exists in the database, and CREATE is called, then the table is dropped and a new one is created.
Object Based Behavior: If the table already exists in the database, then a File System Error (90) is set. If replacing the table is desired, do a REMOVE(table) first.

Traditional Behavior: CREATE INDEX could generate an error if the index already exists.
Object Based Behavior: The generated index code includes an IF NOT EXISTS clause.

Traditional Behavior: After a call to CREATE, prop:Sql returns only the SQL for creating the last index.
Object Based Behavior: The prop:Sql returned contains the complete CREATE TABLE statement, as well as any CREATE INDEX statements.

Traditional Behavior: COLLATE NOCASE is added to numeric columns in keys.
Object Based Behavior: COLLATE NOCASE is only added to string type fields in keys (where the key is not case sensitive).

Traditional Behavior: Creating tables with TIME fields would fail.
Object Based Behavior: The TIME data type is supported.
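The two CREATE behaviors can be demonstrated directly against SQLite. This sketch uses Python's built-in sqlite3 module; the table and index names are illustrative, not driver output:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

# A plain CREATE TABLE against an existing table is an SQLite error;
# the object based driver reports this as File System Error (90)
# instead of dropping and recreating the table.
table_exists_error = False
try:
    con.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
except sqlite3.OperationalError:
    table_exists_error = True

# With IF NOT EXISTS, repeating the index creation is harmless.
con.execute("CREATE INDEX IF NOT EXISTS k_name ON customers (name)")
con.execute("CREATE INDEX IF NOT EXISTS k_name ON customers (name)")
index_count = con.execute(
    "SELECT count(*) FROM sqlite_master WHERE type='index' AND name='k_name'"
).fetchone()[0]
```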

COLLATE

When a STRING, CSTRING, or PSTRING field is created in the database, it can have a default collation. This collation can be set specifically using the COLLATE(something) Extended Name Attribute.

If the field is set as BINARY then COLLATE BINARY is used.

If no COLLATE attribute exists, the field is not BINARY, and the string field is used in a case-insensitive key, then it will be created as COLLATE NOCASE.

If no COLLATE attribute exists, the field is not BINARY, the string is not used in a case-insensitive key, and the field is a STRING (not a PSTRING or CSTRING), then it will be created as COLLATE RTRIM.

Files containing tables created with the SQLite2 driver are likely to be a lot smaller than files containing tables created with the traditional SQLite driver. This is because STRING fields are usually created with COLLATE RTRIM - meaning the strings are stored clipped. If they need to be stored full-length (i.e. if they can include trailing spaces) then consider using a CSTRING or PSTRING type instead. If they need to contain binary data, and are longer than the 255 character limit for PSTRING, then use the BINARY attribute on a STRING type.

If you are using the traditional SQLite driver then you may notice that it fails to apply the NOCASE collation, even if the key is set to NOCASE. Using the new driver solves this problem.
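The collation rules above determine how comparisons behave at the SQLite level. A small sketch with Python's built-in sqlite3 module (the column names are illustrative) shows NOCASE ignoring case and RTRIM ignoring trailing spaces:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# "name" stands in for a string used in a case-insensitive key (NOCASE);
# "code" stands in for a plain STRING field (RTRIM).
con.execute(
    "CREATE TABLE demo (name TEXT COLLATE NOCASE, code TEXT COLLATE RTRIM)")
con.execute("INSERT INTO demo VALUES ('Smith', 'AB   ')")

# NOCASE: comparisons ignore case.
nocase_hits = con.execute(
    "SELECT count(*) FROM demo WHERE name = 'SMITH'").fetchone()[0]
# RTRIM: comparisons ignore trailing spaces, which is why storing
# STRING values clipped is safe.
rtrim_hits = con.execute(
    "SELECT count(*) FROM demo WHERE code = 'AB'").fetchone()[0]
```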

Driver Options

See Also Common Driver Options.

Option Default Value Description
/AUTOINC=true false If set to true, this changes the automatic ROWID assignment algorithm to prevent the reuse of row ids over the lifetime of the database. In other words, the purpose of AUTOINC is to prevent the reuse of row id values from previously deleted rows. The AUTOINC setting imposes extra CPU, memory, disk space, and disk I/O overhead and should be avoided if not strictly needed. It is usually not needed.
/BUSYTIMEOUT=n 5000 Sets the SQLite Busy Timeout for the connection. The value is in milliseconds.
/NOPRIMARY=true false Adds WITHOUT ROWID to the CREATE statement.
/PRAGMA
see Pragma.
/WAL
Equivalent to /pragma_journal_mode=WAL /pragma_synchronous=NORMAL.
Note that WAL mode is persistent for a database - once set, it is in force for all future accesses of the database.
However pragma synchronous is set on a per-connection basis, so it is a good candidate for the file driver string.
/WHERE
Used with SEND. Same as setting {prop:Where}.
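The /AUTOINC option corresponds to SQLite's AUTOINCREMENT keyword. The difference it makes to row id reuse can be seen with Python's built-in sqlite3 module (table names here are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Plain INTEGER PRIMARY KEY: the id of a deleted row at the top of the
# table can be handed out again.
con.execute("CREATE TABLE plain (id INTEGER PRIMARY KEY)")
# AUTOINCREMENT (what /AUTOINC=true requests): ids are never reused.
con.execute("CREATE TABLE auto (id INTEGER PRIMARY KEY AUTOINCREMENT)")

for table in ("plain", "auto"):
    con.execute(f"INSERT INTO {table} DEFAULT VALUES")  # id 1
    con.execute(f"INSERT INTO {table} DEFAULT VALUES")  # id 2
    con.execute(f"DELETE FROM {table} WHERE id = 2")
    con.execute(f"INSERT INTO {table} DEFAULT VALUES")  # id reused?

plain_max = con.execute("SELECT max(id) FROM plain").fetchone()[0]  # 2: reused
auto_max = con.execute("SELECT max(id) FROM auto").fetchone()[0]    # 3: not reused
```

The bookkeeping for AUTOINCREMENT (SQLite's internal sqlite_sequence table) is what imposes the extra overhead mentioned above.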

prop:Explain

The output from the prop:Explain property is sent to DebugView++ (or equivalent).

Field Options

The driver supports Extended Name Attributes for fields.

To set a field to a specific data type, unique to this driver, use SQLite2Type(whatever) where whatever is a data type unique to SQLite.
This will override the type set using SQLType(something) if it exists.

Example;

paid         Byte,name('paid | sqltype(tinyint) | SQLite2Type(Integer)')

FLUSH

If the database is in WAL mode, then a FLUSH commits the journal file (as much as is possible without disturbing other processes). It is not necessary to call this; the journal will be committed automatically when it gets large, but manually committing (i.e. calling FLUSH) after a lot of writes can marginally improve read performance.
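Under the hood this corresponds to an SQLite WAL checkpoint. A PASSIVE checkpoint is the "as much as possible without disturbing other processes" variant; it can be sketched with Python's built-in sqlite3 module (the file and table names are illustrative, not driver code):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "wal_demo.sqlite")
con = sqlite3.connect(path)
con.execute("PRAGMA journal_mode=WAL")
con.execute("CREATE TABLE log (msg TEXT)")
con.executemany("INSERT INTO log VALUES (?)", [("x",)] * 1000)
con.commit()

# PASSIVE checkpoints whatever it can without blocking other readers
# or writers; the returned row reports progress.
busy, wal_frames, moved = con.execute(
    "PRAGMA wal_checkpoint(PASSIVE)").fetchone()
con.close()
```

With no other connections open, the checkpoint completes fully: busy is 0 and every frame in the journal is moved into the database file.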

PRAGMA

SQLite allows the program to control many aspects of how the database behaves. These settings, known in SQLite as PRAGMAs, can be set by the application. Some are transitory (they only apply to the connection and have to be re-applied); others are permanent and apply to the database itself. Some care should be taken when using PRAGMAs, and consultation with the SQLite documentation (https://www.sqlite.org/pragma.html) is recommended.

The SEND command can be used to set pragmas, and also to retrieve the current pragma values. They can also be set using the Driver Options String. To identify an option as a pragma, prefix the option being set with PRAGMA_.

Example

jsl = SEND(customers,'/PRAGMA_journal_size_limit=1000')

jsl = SEND(customers,'/PRAGMA_journal_size_limit')
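The two SEND calls above map onto plain PRAGMA statements. With SQLite itself (sketched here using Python's built-in sqlite3 module, not the driver), setting journal_size_limit returns the newly applied value, and issuing the bare pragma reads the current one:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Set: PRAGMA journal_size_limit=1000 returns the new limit.
set_value = con.execute("PRAGMA journal_size_limit=1000").fetchone()[0]
# Get: the bare pragma returns the current value.
get_value = con.execute("PRAGMA journal_size_limit").fetchone()[0]
```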

Properties

Property Comment
prop:Hdbc Returns the current connection handle. As used by the sqlite3 API.

SEARCH

The SQLite2 driver supports the SEARCH command. Internally it makes use of the SQLite extension known as FTS5. All the necessary overhead to support this feature in the database will be created automatically by the driver as needed. Triggers are automatically added to the table to ensure that the FTS indexes are maintained.
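The FTS5 plumbing the driver generates can be approximated by hand. The following sketch (Python's built-in sqlite3 module; the table and trigger names are illustrative, and it assumes SQLite was built with FTS5 enabled, as most builds are) shows an external-content FTS5 index kept in step by triggers, which is the shape of the overhead the driver creates for you:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
# External-content FTS5 index over notes.body.
con.execute(
    "CREATE VIRTUAL TABLE notes_fts USING fts5("
    "body, content=notes, content_rowid=id)")
# Triggers keep the FTS index in step with the base table
# (an UPDATE trigger, omitted here for brevity, would do a delete
# followed by an insert).
con.executescript("""
CREATE TRIGGER notes_ai AFTER INSERT ON notes BEGIN
  INSERT INTO notes_fts(rowid, body) VALUES (new.id, new.body);
END;
CREATE TRIGGER notes_ad AFTER DELETE ON notes BEGIN
  INSERT INTO notes_fts(notes_fts, rowid, body)
  VALUES ('delete', old.id, old.body);
END;
""")
con.execute("INSERT INTO notes (body) VALUES ('write ahead logging is fast')")
con.execute("INSERT INTO notes (body) VALUES ('full text search with fts5')")
hits = con.execute(
    "SELECT rowid FROM notes_fts WHERE notes_fts MATCH 'search'").fetchall()
```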

Write Ahead Logging (WAL)

IF (and it's a BIG IF) all the programs that access the SQLite database file are on the same PC, THEN the program can make use of Write Ahead Logging (known as WAL[1]). While WAL is dramatically faster than other journal modes, it cannot be used when multiple processes on different hardware (or virtual machines) are accessing the file. There are also some (less problematic) disadvantages, so be sure to check the SQLite documentation before turning on WAL.

Aside: Using SQLite on a non-local drive is inherently risky, and not recommended by the SQLite authors.
See SQLite Over a Network, Caveats and Considerations for more.

Note that the journal_mode is a persistent setting. It can only be set if no other processes are currently connected to the database. Once set it will be remembered, and it will remain in force for future connections to this database by this, and other processes. When switching out of WAL mode (if you decide to do so), a call to PACK is recommended to clean up the file.

To activate WAL use the SEND command. It can also be set using the Driver Options String.

glo:Owner = 'data.sqlite'
mode = SEND(customers,'/WAL')


This can be called after the owner attribute for the file (customers in this case) has been set, but before the file is opened.

This is a database-wide setting. You can set if using any table in the database, however the setting then applies to every table in the database, and, as mentioned earlier, it does not need to be reset. It is physically stored in the database file itself.
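That the journal_mode setting is stored in the file, and survives across connections, is easy to verify against SQLite itself. This sketch uses Python's built-in sqlite3 module with an illustrative file name:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "wal_persist.sqlite")

# The first connection switches the database into WAL mode.
con = sqlite3.connect(path)
mode = con.execute("PRAGMA journal_mode=WAL").fetchone()[0]
con.close()

# A brand new connection finds WAL already in force: the mode is stored
# in the database file itself, not on the connection.
con2 = sqlite3.connect(path)
persisted = con2.execute("PRAGMA journal_mode").fetchone()[0]
con2.close()
```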

Note 1: SQLite docs for WAL : https://www.sqlite.org/wal.html
Note 2: SQLite Docs for Journal Modes : https://www.sqlite.org/pragma.html#pragma_journal_mode

Tip: If WAL is not appropriate for your program, then the recommended journal mode is TRUNCATE. It can be set in code using the SEND command.

InMemory2 Driver


The InMemory2 driver is based on the SQLite2 driver, and makes use of SQLite's in-memory functionality. As such it is a SQL driver, and all SQL functionality is available. This section of the document covers any explicit differences from the traditional MEMORY driver, as well as any differences from the SQLite2 driver.

Headline Differences

Code Traditional MEMORY Behavior InMemory2 Behavior
OWNER Not needed Required internally. If not supplied, the name memdb will be used.

Driver Options

All the SQLite2 Driver Options are applicable. The ones listed below apply only to the InMemory2 driver.

Option Default Value Description
/THREADEDCONTENT=true False If this is set to true then each thread has its own "database". Data is not shared across threads. This is in contrast to the normal behavior, which is that the same data is available and visible to all threads.
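The two behaviors correspond to SQLite's private versus shared in-memory databases. A sketch with Python's built-in sqlite3 module (the name memdb mirrors the driver's default owner name, but this code is illustrative, not the driver's own):

```python
import sqlite3

# Private in-memory databases: each connection has its own data,
# comparable to /THREADEDCONTENT=true.
a = sqlite3.connect(":memory:")
b = sqlite3.connect(":memory:")
a.execute("CREATE TABLE t (n INTEGER)")
private_sees_it = b.execute(
    "SELECT count(*) FROM sqlite_master WHERE name='t'").fetchone()[0]

# A named, shared in-memory database: every connection sees the same
# data, comparable to the default behavior across threads.
c = sqlite3.connect("file:memdb?mode=memory&cache=shared", uri=True)
d = sqlite3.connect("file:memdb?mode=memory&cache=shared", uri=True)
c.execute("CREATE TABLE t (n INTEGER)")
shared_sees_it = d.execute(
    "SELECT count(*) FROM sqlite_master WHERE name='t'").fetchone()[0]
```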

Field Options

The driver supports Extended Name Attributes for fields.

To set a field to a specific data type, unique to this driver, use InMem2Type(whatever) where whatever is a data type unique to SQLite.
This will override the type set using SQLType(something) if it exists.

Example;

paid         Byte,name('paid | sqltype(tinyint) | InMem2Type(Integer)')

Release History

Version 1.00 - April 25, 2025