Great article! I use your AWS and Azure examples all the time with HealthShare and TrakCare deployments!!! This GCP example will surely be beneficial as well!
Transparent HugePages (THP) are not the same as standard HugePages, and the difference is especially important for IRIS and its shared memory segment: THP does not handle shared memory segments. Please see my article that discusses this in detail: https://community.intersystems.com/post/linux-transparent-hugepages-and-impact-intersystems-iris
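If you want to confirm what a given Linux host is actually doing, here is a minimal Python sketch. It only assumes the standard kernel sysfs/procfs paths (nothing IRIS-specific) and reports the active THP mode plus the standard HugePages reservation from /proc/meminfo:

def read_file(path):
    # Return the file contents, or None if the path does not exist on this kernel.
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return None

# THP mode, e.g. "always madvise [never]" - the bracketed value is the active one.
thp = read_file("/sys/kernel/mm/transparent_hugepage/enabled")
print("Transparent HugePages:", thp)

# Standard HugePages reserved for shared memory use, from /proc/meminfo.
for line in (read_file("/proc/meminfo") or "").splitlines():
    if line.startswith(("HugePages_Total", "HugePages_Free", "Hugepagesize")):
        print(line)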
Message delivery is dependent on queuing, so messages on the compute node(s) may not yet have completed their path through the production - especially the outbound send to the Message Bank - and the Message Bank will only "bank" those messages that have actually been sent to it. If there are still messages queued in the production, they will remain in the production queues until those pods are restarted, or until PreStop hooks are used to give the pod a grace period on container shutdown so that all queues can drain. An Interoperability Production is a stateful set, and the queues are required to support message delivery guarantees.
Hi David,
We make a large number of metrics available from within the IRIS instance via a REST API. The REST API can be used to integrate with Azure Monitor or any other third-party monitoring solution that supports REST. Exactly which metrics to use, and the threshold values to set for them, will depend largely on your application.
As a starting point, I would suggest the following as a minimum:
cpu_usage
db_freespace
db_latency
glo_ref_per_sec
glo_update_per_sec
jrn_block_per_sec
license_percent_used
phys_mem_percent_used
phys_reads_per_sec
phys_writes_per_sec
process_count
system_alerts_new
wd_cycle_time
The metrics collected are agnostic to running in Azure, AWS, or on-prem, so they are useful in any deployment scenario. Here's a link to all the standard available metrics and their descriptions within IRIS:
https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GCM_rest#GCM_rest_metrics
You can also create application specific metrics. The details can be found here:
https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GCM_rest#GCM_rest_metrics_application
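As a rough illustration of how these metrics can feed a third-party monitor, here is a minimal Python sketch. It assumes the default /api/monitor/metrics endpoint (Prometheus exposition format) on the instance's web server port; the host, port, and threshold values below are placeholders to adjust for your own application, and the iris_ metric-name prefix should be checked against the documentation link above:

import urllib.request

BASE_URL = "http://iris-host:52773/api/monitor/metrics"   # placeholder host/port
WARN_THRESHOLDS = {                                        # illustrative values only
    "iris_cpu_usage": 80,
    "iris_license_percent_used": 90,
    "iris_phys_mem_percent_used": 90,
}

with urllib.request.urlopen(BASE_URL, timeout=10) as resp:
    body = resp.read().decode("utf-8")

for line in body.splitlines():
    if line.startswith("#") or not line.strip():
        continue                                   # skip comments and blank lines
    parts = line.split()
    if len(parts) < 2:
        continue
    metric = parts[0].split("{", 1)[0]             # strip any {labels}
    threshold = WARN_THRESHOLDS.get(metric)
    if threshold is not None and float(parts[1]) > threshold:
        print("WARNING: %s = %s (threshold %s)" % (metric, parts[1], threshold))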
I hope this helps.
Thanks,
Mark B-
Hi Anzelem,
We have a very strict Hardware Compatibility List (HCL) for HealthShare and TrakCare that we provide for our preferred solutions, based on vendor benchmark testing and live sites. We encourage all our HealthShare and TrakCare customers to stick to the HCL to ensure predictable performance and the high availability customers expect from our software.
In regard to the HPE Synergy 480 blades, these are just traditional blade servers and nothing really all that different, as long as they use one of our recommended processors and have the network/storage adapters to reach all-flash SAN storage. When getting into hyper-converged infrastructure (HCI) solutions, the storage architecture and management is the key factor (and potential pain point), because some HCI solutions are OK and others not so much.
I hope this helps.
Regards,
Mark B-
Hi Eriks,
Specific to your question about why you cannot achieve 200MB/s, there are some physical reasons why this is the case. Firstly, your file copy is a completely different IO operation - it's performed with larger block-size requests and is 100% sequential, benefiting from the file cache and/or storage controller cache along with NTFS read-ahead prediction.
In a Caché SQL query, Caché (or IRIS) does 8KB block reads, presumably random in nature as well depending on the query and the data/global structure, so any caching will be mostly limited to whatever you have defined for the database cache (global buffers) in the Caché instance. Since this is 5.0.21, I wouldn't expect your installation to have hundreds of GB of global buffers (and I would not recommend that on 5.0.21 either), so you are at the mercy of the disk latency of a single process doing random 8KB reads, not the total throughput you see in a file copy operation.
So, based on the ~20MB/sec you are seeing, you are getting about 2500 8KB IOPS, or 0.4ms single-process storage latency - this is actually very good performance for a single process. As you add more jobs in parallel you start approaching other limits in the IO chain, such as SCSI queue depths at the VM layer, at the VMware ESXi layer, etc., and it becomes more an IO-operation limitation than a throughput (MB/s) limitation.
I hope this helps explain the situation you are seeing. It is expected behavior, because the ~20MB/s is simply a function of the storage latency for a single process (0.4ms): a maximum of ~2500 IOPS * 8KB IO size = ~20MB/sec.
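To make the arithmetic explicit, here is a small Python sketch of the same calculation (the 0.4ms latency and 8KB block size are the figures from above; the point is that a single synchronous reader is bounded by per-IO latency, not by what the array can deliver in aggregate):

latency_s = 0.0004            # ~0.4 ms single-process random read latency
block_bytes = 8 * 1024        # Caché/IRIS 8KB database block size

iops = 1 / latency_s                                  # ~2,500 IOs per second
throughput_mb_s = iops * block_bytes / (1024 * 1024)  # ~19.5 MB/s

print("IOPS: %.0f" % iops)
print("Throughput: %.1f MB/s" % throughput_mb_s)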
Kind regards,
Mark B-
They used to be available on our website, but have since been removed because the results were from 3 years ago. The summary results from 2015 and 2017 have been included in graph-1 above in this new report for comparison. Thanks.
Correct. The Gold 6252 series (aka "Cascade Lake") supports both DCPMM and DRAM. However, keep in mind that when using DCPMM you still need DRAM and should adhere to at least an 8:1 ratio of DCPMM:DRAM.
Hi Eduard,
Thanks for your questions.
1- At small scale I would stay with traditional DRAM. DCPMM becomes beneficial at >1TB of capacity.
2- That was DDR4 DRAM in both the read-intensive and write-intensive Server #1 configurations. In the read-intensive server configuration it was specifically DDR-2400, and in the write-intensive server configuration it was DDR-2600.
3- There are different CPUs in the read-intensive workload configurations because this testing is meant to demonstrate upgrade paths from older servers to newer technologies and the scalability increases offered in that scenario. The write-intensive workload only used a different server in the first test, to compare the previous generation to the current generation with DCPMM. The three following results then demonstrate the differences in performance within the same server - just different DCPMM configurations.
4- Thanks. I will see what happened to the link and correct it.
Hi all,
Please note that these scripts are also usable with IRIS. In each of the 'pre' and 'post' scripts you only need to change the "csession <CACHE INSTANCE> ..." references to "iris session <IRIS INSTANCE> ..."
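For illustration only, here is a minimal Python sketch of that substitution, assuming a Linux host and a placeholder instance name; the real 'pre' and 'post' scripts (including the exit-code handling) are the ones provided with the article, and only the launcher command changes between Caché and IRIS:

import subprocess

INSTANCE = "IRIS"    # placeholder instance name

def run(cmd):
    print("running:", " ".join(cmd))
    return subprocess.run(cmd).returncode

# Caché form used in the original scripts:
#   csession <CACHE INSTANCE> -U%SYS "##Class(Backup.General).ExternalFreeze()"
# IRIS form - only the launcher changes, the ExternalFreeze/ExternalThaw calls stay the same:
freeze = ["iris", "session", INSTANCE, "-U%SYS",
          "##Class(Backup.General).ExternalFreeze()"]
thaw = ["iris", "session", INSTANCE, "-U%SYS",
        "##Class(Backup.General).ExternalThaw()"]

run(freeze)    # called from the 'pre' script, before the snapshot is taken
# ... external snapshot/backup happens here ...
run(thaw)      # called from the 'post' script, once the snapshot completes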
Regards,
Mark B-
This is certainly a good option as well; however, there is still some risk with that approach, in case there are real issues during the backup/snapshot and you actually do want a failover to occur. This is a good example showing that there are numerous options available.
Using Veeam backup/snapshot is very common with Caché and IRIS, and when using the snapshot process there are a couple of things to be aware of:
1. Make sure you are NOT including the VM's memory state, as this will significantly lengthen VM stun times.
2. Make sure you are current with VMware vSphere patches, as there are some known issues with snapshot performance and data consistency in older versions of vSphere. I would recommend being on vSphere 6.7 or above.
3. You need to make sure your journal disk is on a different VMDK than any of your CACHE.DATs and the CACHE.WIJ, especially for the moment you thaw the instance, because a large burst of writes may happen and cause IO to flood/serialize the device and potentially block or slow down journal writes (...and trigger a premature mirror failover because of it). A quick way to confirm the separation is sketched just after this list.
4. You definitely need to use the ExternalFreeze/Thaw APIs to ensure the CACHE.DATs within the snapshot are "clean".
5. Confirm your current QoS timeout value, as some earlier versions of Caché had a very low QoS value; with snapshots I believe it should be set to 8 seconds and should not exceed 30 seconds.
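As a quick way to verify point 3, here is a minimal Python sketch, assuming a Linux guest; the two directory paths are placeholders for your own journal and database locations. Separate VMDKs normally surface as separate block devices, so the directories should report different device IDs:

import os

JOURNAL_DIR = "/journal/current"    # placeholder - your primary journal directory
DATABASE_DIR = "/db/cache"          # placeholder - directory holding CACHE.DAT and the WIJ

jrn_dev = os.stat(JOURNAL_DIR).st_dev
db_dev = os.stat(DATABASE_DIR).st_dev

if jrn_dev == db_dev:
    print("WARNING: journal and database directories are on the same device")
else:
    print("OK: journal and database directories are on different devices")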
The links that Peter mentioned are also very good references for more details.
Hi Alexey,
I can help with your question. The reason it works this way is that you can't (or at least shouldn't) have a database file (CACHE.DAT or IRIS.DAT) opened in contending modes (both unbuffered and buffered), to avoid file corruption or stale data. Now, the actual writing of the online backup CBK file can be a buffered write because it is independent of the DB, as you mentioned, but the reads of the database blocks by the online backup utility will be unbuffered direct IO reads. This is where the slowdown may occur: in reading the database blocks, not in the actual writing of the CBK backup file.
Regards,
Mark B-
For those watching this thread: we have introduced VSS integration starting with version 2018.1. Here is a link to our VSS support announcement.
Additionally, InterSystems IRIS and IRIS for Health are now available within the AWS marketplace:
https://aws.amazon.com/marketplace/seller-profile?id=6e5272fb-ecd1-4111-8691-e5e24229826f
Hi Scott,
Have you looked at using the Ensemble Enterprise Monitor? It provides a centralized "single pane of glass" dashboard-type display across multiple productions. Details of using it can be found here in the Ensemble documentation.
Regards,
Mark B-
Hi Ashish,
We are actively working with Nutanix on a potential example reference architecture, but nothing is imminent at this time. The challenge with HCI solutions, Nutanix being one of them, is that there is more to the solution than just the nodes themselves. The network topology and switches play a very important role.
Additionally, performance with HCI solutions is good...until it isn't. What I mean by that is performance can be good with HCI/SDDC solutions, however maintaining the expected performance during node failures and/or maintenance periods is the key. Not all SSDs are created equal, so consideration of storage access performance in all situations - normal operations, failure conditions, and node rebuild/rebalancing - is important. Data locality also plays a large role with HCI, and in some HCI solutions so does the working dataset size (i.e., a larger dataset with random access patterns to that data can have an adverse and unexpected impact on storage latency).
Here's a link to an article I authored regarding our current experiences and general recommendations with HCI and SDDC-based solutions.
https://community.intersystems.com/post/software-defined-data-centers-sddc-and-hyper-converged-infrastructure-hci-–-important
So, in general, be careful when considering any HCI/SDDC solution not to fall for the HCI marketing hype or promises of being "low cost". Be sure to consider failure/rebuild scenarios when sizing your HCI cluster. Many times the often-quoted "4-node cluster" just isn't ideal, and more nodes may be necessary to sustain performance during failure/maintenance situations within a cluster. We have come across many of these situations, so test, test, test. :)
Kind regards,
Mark B
Hello Ashish,
Great question. Yes, NetBackup is widely used by many of our customers, and the approach of using the ExternalFreeze/Thaw APIs is the best one. Also, with your environment being on VMware ESXi 6, we can support using VMDK snapshots as part of the backup process, assuming you have the NetBackup 8.1 feature that supports VMware guest snapshots. I found the following link on NetBackup 8.1 and its support in a VMware environment: https://www.veritas.com/content/support/en_US/doc/NB_70_80_VE
You will want to have pre/post scripts added to the NetBackup backup job so that the database is frozen prior to taking the snapshot and then thawed right after the snapshot. NetBackup will then take a clean backup of the VMDKs, providing an application-consistent backup. Here is another link to an article on using ExternalFreeze/Thaw in a VMware environment: https://community.intersystems.com/post/intersystems-data-platforms-and-performance-–-vm-backups-and-caché-freezethaw-scripts
I hope this helps. Please let me know if you have any questions.
Regards,
Mark B-
Hi Jason,
We are working on a similar utility for writes now, to support either a write-only or a mixed read/write workload. I hope to have it posted to the community in the next few weeks.
Kind regards,
Mark B-
Hi Jason,
Thank you for your post. We provide a storage performance utility called RANREAD. It actually uses HealthShare/Ensemble (also Caché and InterSystems IRIS) to generate the workload, rather than relying on an external tool trying to simulate what HealthShare/Ensemble might do. You can find the details in this community article here.
Kind regards,
Mark B-