Result: Improving performance/stability in MAPI connections

RNEUSCHUL
Posts: 92
Joined: Wed Jan 27, 2010 11:16 pm

Result: Improving performance/stability in MAPI connections

Postby RNEUSCHUL » Sat Apr 10, 2010 12:13 am

We're currently running the latest beta MAPI connector: the users are geographically distributed on a wide variety of ISPs, connecting to a cloud-based ME server which is running on a high performance backbone; however the performance of the MAPI connector on several of the users' machines has varied from entirely unusable to more or less OK.

Because the ME server is not a domain or DC server, and for all practical purposes is only handling email, it requires no Windows-level [NTLM] user authentication of logins etc., so we had seen no need to enable NetBIOS capabilities on the server or on free-standing mobile laptops etc.

After doing some reading in the Technet KB I tried an experiment: I enabled NetBIOS over TCP at the NIC, opened up the firewall on TCP [not UDP] ports 137-139, and then reset some of the client machines to use NetBIOS over TCP as well.

The general stability and speed of MAPI services - as experienced by users - improved as a consequence.
I don't know for sure [and don't have the time to port/session sniff to track this down] but I do wonder what other "hidden" port/protocol dependencies there may be in the MAPI services.

Needless to say, I'm not happy with the security implications of having these or any other unnecessary ports open on an internet-facing server and will probably block them again fairly shortly.

I'd welcome comments from ME about this issue.

jauch
Posts: 33
Joined: Mon Feb 26, 2007 3:23 am

Re: Result: Improving performance/stability in MAPI connections

Postby jauch » Mon Apr 12, 2010 4:41 pm

I, too, have seen much improvement in the stability and performance of clients running MAPI by opening ports 137-139 on the firewall, in addition to running NetBIOS over TCP/IP on both the ME server and the roaming client. I also have serious security concerns about having those ports open on an internet-facing server. I'd like to hear from the ME MAPI connector developers on this issue, or at least get more detail on the inner workings of the connector.

-tim

MailEnable
Site Admin
Posts: 4435
Joined: Tue Jun 25, 2002 3:03 am
Location: Melbourne, Victoria Australia

Re: Result: Improving performance/stability in MAPI connections

Postby MailEnable » Tue Apr 20, 2010 12:03 am

MailEnable's MAPI protocol operates as an extension of IMAP (typically port 143). The server binds and listens for connections from IMAP and MAPI clients (just as any other Windows socket service does). The IMAP service does use I/O completion ports for scalability - this can be turned off, but using them is best practice, and it should have nothing to do with NetBIOS.

The client connects to the server (just like an IMAP client does), by connecting to port 143 and accepting a session.
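The handshake Andrew describes can be sketched with ordinary sockets. This is only an illustration, not MailEnable's actual code: the banner text is a generic IMAP-style greeting, and a throwaway local listener on an ephemeral port stands in for the real server on port 143.

```python
import socket
import threading

# Stand-in "server": accepts one connection and sends an
# IMAP-style greeting (placeholder text, not ME's real banner).
def greet(srv):
    conn, _ = srv.accept()
    conn.sendall(b"* OK IMAP4rev1 server ready\r\n")
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # a real client would connect to port 143
srv.listen(1)
threading.Thread(target=greet, args=(srv,), daemon=True).start()

# The client simply opens a TCP session and reads the banner,
# exactly as an IMAP client would.
client = socket.create_connection(srv.getsockname(), timeout=5)
banner = client.recv(1024).decode().strip()
print(banner)
client.close()
srv.close()
```

Nothing in this exchange touches NetBIOS; it is plain TCP from connect to banner.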

The client allocates a pool of socket connections to the server for issuing commands to the server (to improve client responsiveness).

It also holds a persistent connection for receiving backchannel updates from the server while the client is connected - for new mail and notification processing.

There is nothing special or specific that MailEnable's server / client does with respect to tcp/ip connectivity or netBIOS encapsulation. (Note: there are some socket calls like gethostbyaddr that may utilize netBIOS for name resolution, but MailEnable does not use such calls - and even if it did, they would only be made when a new socket connection is established, not when transmitting/receiving data.)

I would suggest that if you are receiving improved throughput from clients, you should notice the same throughput improvements against other services (in particular if you are using an IMAP client to MailEnable, or indeed even a web-browser to IIS/Web Server).

Whilst what I post here does not provide much insight into why you are experiencing improved throughput with netBIOS encapsulation, it should provide some assurances that there is nothing specific that MailEnable's services do with respect to netBIOS.
Regards, Andrew

RNEUSCHUL
Posts: 92
Joined: Wed Jan 27, 2010 11:16 pm

Re: Result: Improving performance/stability in MAPI connections

Postby RNEUSCHUL » Tue Apr 20, 2010 3:53 pm

Andrew

Thanks for that detailed response which broadly concurs with my own understanding of MAPI operations and of MAPI coding.

We can confirm from Wireshark outputs [at both ends of a session] that what you say seems to be correct; there is no obvious utilisation of unexpected ports by ME server or the MAPI client, but the fact still remains that turning NetBios off at both ends [and closing down the relevant firewall ports] does appear to degrade stability and performance for some [but not all] users of the MAPI Connector [even when using the latest 1.24c].

This behaviour is an anomaly which I find rather strange, and it's not something I've previously encountered with any other form of MAPI connections - whether to Exchange or to other mail services such as Google, Zimbra or Communigate.

I'm not at all sure how to proceed at this point: we have an open support ticket for a set of related mapi issues, so it's possible there's a connection between them.

Finally, could you please point me at a document that explains the management of the IOCompletion ports: I'd like to test and see if altering this makes any difference to stability and speed for end-users.

Thanks again

Robert

MailEnable
Site Admin
Posts: 4435
Joined: Tue Jun 25, 2002 3:03 am
Location: Melbourne, Victoria Australia

Re: Result: Improving performance/stability in MAPI connections

Postby MailEnable » Wed Apr 21, 2010 5:13 am

Robert,

There is no internal documentation for completion ports with respect to MailEnable, since it's not really something that is intended to be turned off or configured. If you wish to experiment, however, it can be disabled/enabled accordingly (requires an IMAP service restart).

Root: HKEY_LOCAL_MACHINE\SOFTWARE\Mail Enable\Mail Enable\Services\IMAP
Value Name: Use Completion Ports
Value Type: DWORD
Value: 1 Enabled; 0 Disabled
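For convenience, the setting above can be captured in a `.reg` file (a sketch built directly from the values Andrew lists; back up the key first, and restart the IMAP service after applying):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Mail Enable\Mail Enable\Services\IMAP]
"Use Completion Ports"=dword:00000000
```

Change the dword to `00000001` to re-enable completion ports.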

[I really think that herrings don't get any redder than this though :-)]

Earlier versions of MailEnable did not implement IOCP, but it becomes necessary at large concurrency because it is not possible/scalable to have a large number of dedicated threads allocated to client connections: Windows limits the amount of memory available for managing a process's thread pool. As an example, Microsoft services [like IIS, SMTP, etc.] utilize IOCP to manage connected clients (it is unlikely that IIS can be made not to use it). The way it works is that the server creates a bank of worker threads, and the worker threads are allocated to client connections as clients make requests to the server. In effect, rather than issuing a dedicated blocking send/recv loop for each client, the operating system takes responsibility for communicating with the client and signals the server application whenever data is pending or an event occurs. This allows a large number of connections without requiring a large number of dedicated/blocking socket threads.
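IOCP itself is a Windows API, but the readiness-driven pattern described above can be sketched portably with Python's selectors module (epoll/kqueue underneath; IOCP sits behind a similar idea on Windows). A socketpair stands in for a client/server connection; none of this is MailEnable code.

```python
import selectors
import socket

# One event loop services many sockets: the OS reports which
# connections have pending data, so no connection needs a
# dedicated blocking thread.
sel = selectors.DefaultSelector()
a, b = socket.socketpair()           # stand-in client/server link
for s in (a, b):
    s.setblocking(False)
    sel.register(s, selectors.EVENT_READ)

a.sendall(b"HELLO")                  # "client" writes; nothing blocks

# A worker wakes only when the OS signals readiness, then handles
# exactly the socket that has data.
ready = [key.fileobj for key, _ in sel.select(timeout=1)]
msg = b.recv(1024).decode() if b in ready else None
print(msg)
```

With thousands of connections, the same single loop (or a small bank of workers) scales where one-thread-per-socket would exhaust the thread pool.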

I do think that IOCP has nothing to do with the issue and is a distraction - particularly since a great deal of load testing has been undertaken and the implementation is mainstream. An example IOCP implementation is outlined here: http://www.codeproject.com/KB/IP/iocp.aspx

I checked the relevant source code again, and there is no netBIOS-specific code implemented within MailEnable. The only possible place is where RPCs are issued to cluster nodes to distribute notifications. That code is only used in a clustered configuration, though - and even then it relies on the underlying RPC bindings, which presumably could utilize NetBIOS if other forms of network resolution are not available - but this is digging deep.

MailEnable (the company) itself uses MAPI over a WAN and finds performance (compared to LAN) to be impacted only by bandwidth or network latency.
Even then, the only time this has a real impact is when a new mailbox with thousands of items is accessed for the first time.

As I see it, there are three broad possibilities.

1. MailEnable mysteriously has an obscure dependency on netbios that is slowing down transmission.
2. The blocking of netBIOS is interfering with the underlying network itself - perhaps causing non-routed congestion/chatter in some way - hence impacting MailEnable and potentially other network services.
3. There is a specific dependency on netBIOS that is slowing down MAPI services within Outlook itself - i.e. perhaps Outlook needs to be tweaked or told not to utilize NetBIOS and it is causing chatter - although Wireshark should have picked this up.

Given that there is no noticeable dialog on those ports showing up in Wireshark, combined with the fact that MailEnable does not foreseeably use netBIOS itself, I should think that the second is more likely.

It would be interesting to know if IMAP exhibits the same kind of issues as you have been experiencing with MAPI. This could be done by turning on troubleshooting and accessing a fresh mailbox using Outlook Express via IMAP both with and without NetBIOS enabled.

It would also be useful to know the effect of disabling each netBIOS port in turn - since I think they all have different roles, and it would provide more insight as to what kind of activity is occurring.

If you have an FTP server set up on the same machine as MailEnable, it would be interesting to know whether large transfers are impacted by changing your network configuration.

Perhaps you could briefly indicate what stability/performance problems become apparent when the NBT ports are blocked - specifically, quantifying the degradation you are experiencing, e.g. the slow-down factor. It is probably best to follow up via your open ticket, because that is better for handling/tracking issues.

Cheers
Regards, Andrew

RNEUSCHUL
Posts: 92
Joined: Wed Jan 27, 2010 11:16 pm

Re: Result: Improving performance/stability in MAPI connections

Postby RNEUSCHUL » Thu Apr 22, 2010 1:17 am

Andrew many thanks for that very detailed reply.

Let me say first of all that I fully concur with your red herring comment: it makes no logical sense to me either. My own comments on NB were simply an observation of the effects seen during my attempts to understand why the MAPI connector displayed such different behavioural issues for different users. The MS Technet docs are quite clear about the way in which Outlook/Exchange use NetBIOS, and I wanted to determine whether the same applied to Outlook/ME - and thus whether blocking NB might be an indirect cause of the performance issues.

Given that a] at least one other user on this board has reported improved stability/speed with the NetBIOS ports opened at both ends, and b] other MAPI connectors in other Outlook profiles on my testbed machines - to remote Zimbra, Communigate and Google services - work perfectly, there is clearly _something_ going on.

The question is what that something is, and also why Wireshark hasn't detected anything obvious [which could possibly be a result of my misreading the Wireshark results].

I concur also with your analysis of the possibilities: they're logical and agree with both my own experience over many years and with observation of how ME and Outlook behave.

I'd already spotted the registry entry, but had not wanted to make changes without explicit confirmation from you.

As for the way in which ME itself uses a WAN - let me say that we're not using a WAN in a conventional sense: the ME server is cloud-based, with users connecting from all over the country.

I'm perfectly willing to test sequential opening/closing of NB ports, but I shall do so whilst also running procmon and other tools to see if I can disentangle what's happening on the ME server and on some of the clients.

I have one minor and currently untested suspicion that the symptoms could just possibly be an artefact of MTU settings, outwith our control at the edge [peering] routers/switches between major trunking ISPs - our users are connecting from all over the UK using a diverse set of ISPs. This is more akin to ME as setup by an ISP than it is for a single company. Testing that is going to be more complex than most scenarios, which means it will take time to set up appropriate arrangements with our own ISP. This is however a straw-grasping thought, and one that can be left to last resorts.

I believe my colleague has already outlined the symptoms in more detail in the support ticket so I won't go into much more detail here, save to say that I have already done a small amount of comparative performance and load testing between webmail, imap, mapi and pop and it's fairly clear that mapi/imap do have problems in some situations, even when the "synchronisation at startup" settings are adjusted.

The ftp test is a very good idea: one I shall carry out during a weekend when we have some slack time.

I'll report back shortly.

Robert

MERoland
Posts: 7
Joined: Tue Sep 27, 2011 6:55 am

Re: Result: Improving performance/stability in MAPI connections

Postby MERoland » Mon Sep 05, 2016 10:55 pm

I experience the very same phenomenon. If I open the NB ports everything works fine.
If I close them I always experience stalls and slowness with the MAPI connector.
Is there any possibility (workaround or configuration) that would give adequate MAPI performance with ME without opening the NB ports?

MERoland
Posts: 7
Joined: Tue Sep 27, 2011 6:55 am

Re: Result: Improving performance/stability in MAPI connections

Postby MERoland » Tue Sep 06, 2016 7:23 am

Sadly I need to correct myself: opening the NB ports did not solve the problem.
I really cannot use MailEnable with huge mailboxes because the MAPI Outlook integration is too slow.
Is there anything that can be done about that - besides not using Outlook and reducing the mailbox size?

MERoland
Posts: 7
Joined: Tue Sep 27, 2011 6:55 am

Re: Result: Improving performance/stability in MAPI connections

Postby MERoland » Tue Sep 06, 2016 9:48 pm

@MailEnable MAPI Development: Could this be one of the reasons for the performance problems? http://stackoverflow.com/questions/22617539/setfilecompletionnotificationmodes-seems-to-not-work-properly
I find that everything speeds up a little if I switch Use Completion Ports off - but then it is still too slow.
